diff --git a/website/route-lockfile.txt b/website/route-lockfile.txt
index 216b53cf5d60..3375c692ab0c 100644
--- a/website/route-lockfile.txt
+++ b/website/route-lockfile.txt
@@ -62,6 +62,7 @@
/ar/subgraphs/guides/near/
/ar/subgraphs/guides/polymarket/
/ar/subgraphs/guides/secure-api-keys-nextjs/
+/ar/subgraphs/guides/subgraph-composition/
/ar/subgraphs/guides/subgraph-debug-forking/
/ar/subgraphs/guides/subgraph-uncrashable/
/ar/subgraphs/guides/transfer-to-the-graph/
@@ -104,6 +105,7 @@
/ar/supported-networks/blast-mainnet/
/ar/supported-networks/blast-testnet/
/ar/supported-networks/bnb-op/
+/ar/supported-networks/bnb-svm/
/ar/supported-networks/boba-bnb-testnet/
/ar/supported-networks/boba-bnb/
/ar/supported-networks/boba-testnet/
@@ -158,6 +160,7 @@
/ar/supported-networks/kaia/
/ar/supported-networks/kylin/
/ar/supported-networks/lens-testnet/
+/ar/supported-networks/lens/
/ar/supported-networks/linea-sepolia/
/ar/supported-networks/linea/
/ar/supported-networks/litecoin/
@@ -188,6 +191,7 @@
/ar/supported-networks/polygon-amoy/
/ar/supported-networks/polygon-zkevm-cardona/
/ar/supported-networks/polygon-zkevm/
+/ar/supported-networks/ronin/
/ar/supported-networks/rootstock-testnet/
/ar/supported-networks/rootstock/
/ar/supported-networks/scroll-sepolia/
@@ -204,10 +208,13 @@
/ar/supported-networks/sonic/
/ar/supported-networks/starknet-mainnet/
/ar/supported-networks/starknet-testnet/
+/ar/supported-networks/stellar-testnet/
+/ar/supported-networks/stellar/
/ar/supported-networks/swellchain-sepolia/
/ar/supported-networks/swellchain/
/ar/supported-networks/telos-testnet/
/ar/supported-networks/telos/
+/ar/supported-networks/ultra/
/ar/supported-networks/unichain-testnet/
/ar/supported-networks/unichain/
/ar/supported-networks/vana-moksha/
@@ -228,6 +235,7 @@
/ar/token-api/evm/get-ohlc-prices-evm-by-contract/
/ar/token-api/evm/get-tokens-evm-by-contract/
/ar/token-api/evm/get-transfers-evm-by-address/
+/ar/token-api/faq/
/ar/token-api/mcp/claude/
/ar/token-api/mcp/cline/
/ar/token-api/mcp/cursor/
@@ -298,6 +306,7 @@
/cs/subgraphs/guides/near/
/cs/subgraphs/guides/polymarket/
/cs/subgraphs/guides/secure-api-keys-nextjs/
+/cs/subgraphs/guides/subgraph-composition/
/cs/subgraphs/guides/subgraph-debug-forking/
/cs/subgraphs/guides/subgraph-uncrashable/
/cs/subgraphs/guides/transfer-to-the-graph/
@@ -340,6 +349,7 @@
/cs/supported-networks/blast-mainnet/
/cs/supported-networks/blast-testnet/
/cs/supported-networks/bnb-op/
+/cs/supported-networks/bnb-svm/
/cs/supported-networks/boba-bnb-testnet/
/cs/supported-networks/boba-bnb/
/cs/supported-networks/boba-testnet/
@@ -394,6 +404,7 @@
/cs/supported-networks/kaia/
/cs/supported-networks/kylin/
/cs/supported-networks/lens-testnet/
+/cs/supported-networks/lens/
/cs/supported-networks/linea-sepolia/
/cs/supported-networks/linea/
/cs/supported-networks/litecoin/
@@ -424,6 +435,7 @@
/cs/supported-networks/polygon-amoy/
/cs/supported-networks/polygon-zkevm-cardona/
/cs/supported-networks/polygon-zkevm/
+/cs/supported-networks/ronin/
/cs/supported-networks/rootstock-testnet/
/cs/supported-networks/rootstock/
/cs/supported-networks/scroll-sepolia/
@@ -440,10 +452,13 @@
/cs/supported-networks/sonic/
/cs/supported-networks/starknet-mainnet/
/cs/supported-networks/starknet-testnet/
+/cs/supported-networks/stellar-testnet/
+/cs/supported-networks/stellar/
/cs/supported-networks/swellchain-sepolia/
/cs/supported-networks/swellchain/
/cs/supported-networks/telos-testnet/
/cs/supported-networks/telos/
+/cs/supported-networks/ultra/
/cs/supported-networks/unichain-testnet/
/cs/supported-networks/unichain/
/cs/supported-networks/vana-moksha/
@@ -464,6 +479,7 @@
/cs/token-api/evm/get-ohlc-prices-evm-by-contract/
/cs/token-api/evm/get-tokens-evm-by-contract/
/cs/token-api/evm/get-transfers-evm-by-address/
+/cs/token-api/faq/
/cs/token-api/mcp/claude/
/cs/token-api/mcp/cline/
/cs/token-api/mcp/cursor/
@@ -534,6 +550,7 @@
/de/subgraphs/guides/near/
/de/subgraphs/guides/polymarket/
/de/subgraphs/guides/secure-api-keys-nextjs/
+/de/subgraphs/guides/subgraph-composition/
/de/subgraphs/guides/subgraph-debug-forking/
/de/subgraphs/guides/subgraph-uncrashable/
/de/subgraphs/guides/transfer-to-the-graph/
@@ -576,6 +593,7 @@
/de/supported-networks/blast-mainnet/
/de/supported-networks/blast-testnet/
/de/supported-networks/bnb-op/
+/de/supported-networks/bnb-svm/
/de/supported-networks/boba-bnb-testnet/
/de/supported-networks/boba-bnb/
/de/supported-networks/boba-testnet/
@@ -630,6 +648,7 @@
/de/supported-networks/kaia/
/de/supported-networks/kylin/
/de/supported-networks/lens-testnet/
+/de/supported-networks/lens/
/de/supported-networks/linea-sepolia/
/de/supported-networks/linea/
/de/supported-networks/litecoin/
@@ -660,6 +679,7 @@
/de/supported-networks/polygon-amoy/
/de/supported-networks/polygon-zkevm-cardona/
/de/supported-networks/polygon-zkevm/
+/de/supported-networks/ronin/
/de/supported-networks/rootstock-testnet/
/de/supported-networks/rootstock/
/de/supported-networks/scroll-sepolia/
@@ -676,10 +696,13 @@
/de/supported-networks/sonic/
/de/supported-networks/starknet-mainnet/
/de/supported-networks/starknet-testnet/
+/de/supported-networks/stellar-testnet/
+/de/supported-networks/stellar/
/de/supported-networks/swellchain-sepolia/
/de/supported-networks/swellchain/
/de/supported-networks/telos-testnet/
/de/supported-networks/telos/
+/de/supported-networks/ultra/
/de/supported-networks/unichain-testnet/
/de/supported-networks/unichain/
/de/supported-networks/vana-moksha/
@@ -700,6 +723,7 @@
/de/token-api/evm/get-ohlc-prices-evm-by-contract/
/de/token-api/evm/get-tokens-evm-by-contract/
/de/token-api/evm/get-transfers-evm-by-address/
+/de/token-api/faq/
/de/token-api/mcp/claude/
/de/token-api/mcp/cline/
/de/token-api/mcp/cursor/
@@ -770,6 +794,7 @@
/en/subgraphs/guides/near/
/en/subgraphs/guides/polymarket/
/en/subgraphs/guides/secure-api-keys-nextjs/
+/en/subgraphs/guides/subgraph-composition/
/en/subgraphs/guides/subgraph-debug-forking/
/en/subgraphs/guides/subgraph-uncrashable/
/en/subgraphs/guides/transfer-to-the-graph/
@@ -812,6 +837,7 @@
/en/supported-networks/blast-mainnet/
/en/supported-networks/blast-testnet/
/en/supported-networks/bnb-op/
+/en/supported-networks/bnb-svm/
/en/supported-networks/boba-bnb-testnet/
/en/supported-networks/boba-bnb/
/en/supported-networks/boba-testnet/
@@ -866,6 +892,7 @@
/en/supported-networks/kaia/
/en/supported-networks/kylin/
/en/supported-networks/lens-testnet/
+/en/supported-networks/lens/
/en/supported-networks/linea-sepolia/
/en/supported-networks/linea/
/en/supported-networks/litecoin/
@@ -896,6 +923,7 @@
/en/supported-networks/polygon-amoy/
/en/supported-networks/polygon-zkevm-cardona/
/en/supported-networks/polygon-zkevm/
+/en/supported-networks/ronin/
/en/supported-networks/rootstock-testnet/
/en/supported-networks/rootstock/
/en/supported-networks/scroll-sepolia/
@@ -912,10 +940,13 @@
/en/supported-networks/sonic/
/en/supported-networks/starknet-mainnet/
/en/supported-networks/starknet-testnet/
+/en/supported-networks/stellar-testnet/
+/en/supported-networks/stellar/
/en/supported-networks/swellchain-sepolia/
/en/supported-networks/swellchain/
/en/supported-networks/telos-testnet/
/en/supported-networks/telos/
+/en/supported-networks/ultra/
/en/supported-networks/unichain-testnet/
/en/supported-networks/unichain/
/en/supported-networks/vana-moksha/
@@ -936,6 +967,7 @@
/en/token-api/evm/get-ohlc-prices-evm-by-contract/
/en/token-api/evm/get-tokens-evm-by-contract/
/en/token-api/evm/get-transfers-evm-by-address/
+/en/token-api/faq/
/en/token-api/mcp/claude/
/en/token-api/mcp/cline/
/en/token-api/mcp/cursor/
@@ -943,7 +975,6 @@
/en/token-api/monitoring/get-networks/
/en/token-api/monitoring/get-version/
/en/token-api/quick-start/
-/en/token-api/token-api-faq/
/es/
/es/404/
/es/about/
@@ -1007,6 +1038,7 @@
/es/subgraphs/guides/near/
/es/subgraphs/guides/polymarket/
/es/subgraphs/guides/secure-api-keys-nextjs/
+/es/subgraphs/guides/subgraph-composition/
/es/subgraphs/guides/subgraph-debug-forking/
/es/subgraphs/guides/subgraph-uncrashable/
/es/subgraphs/guides/transfer-to-the-graph/
@@ -1049,6 +1081,7 @@
/es/supported-networks/blast-mainnet/
/es/supported-networks/blast-testnet/
/es/supported-networks/bnb-op/
+/es/supported-networks/bnb-svm/
/es/supported-networks/boba-bnb-testnet/
/es/supported-networks/boba-bnb/
/es/supported-networks/boba-testnet/
@@ -1103,6 +1136,7 @@
/es/supported-networks/kaia/
/es/supported-networks/kylin/
/es/supported-networks/lens-testnet/
+/es/supported-networks/lens/
/es/supported-networks/linea-sepolia/
/es/supported-networks/linea/
/es/supported-networks/litecoin/
@@ -1133,6 +1167,7 @@
/es/supported-networks/polygon-amoy/
/es/supported-networks/polygon-zkevm-cardona/
/es/supported-networks/polygon-zkevm/
+/es/supported-networks/ronin/
/es/supported-networks/rootstock-testnet/
/es/supported-networks/rootstock/
/es/supported-networks/scroll-sepolia/
@@ -1149,10 +1184,13 @@
/es/supported-networks/sonic/
/es/supported-networks/starknet-mainnet/
/es/supported-networks/starknet-testnet/
+/es/supported-networks/stellar-testnet/
+/es/supported-networks/stellar/
/es/supported-networks/swellchain-sepolia/
/es/supported-networks/swellchain/
/es/supported-networks/telos-testnet/
/es/supported-networks/telos/
+/es/supported-networks/ultra/
/es/supported-networks/unichain-testnet/
/es/supported-networks/unichain/
/es/supported-networks/vana-moksha/
@@ -1173,6 +1211,7 @@
/es/token-api/evm/get-ohlc-prices-evm-by-contract/
/es/token-api/evm/get-tokens-evm-by-contract/
/es/token-api/evm/get-transfers-evm-by-address/
+/es/token-api/faq/
/es/token-api/mcp/claude/
/es/token-api/mcp/cline/
/es/token-api/mcp/cursor/
@@ -1243,6 +1282,7 @@
/fr/subgraphs/guides/near/
/fr/subgraphs/guides/polymarket/
/fr/subgraphs/guides/secure-api-keys-nextjs/
+/fr/subgraphs/guides/subgraph-composition/
/fr/subgraphs/guides/subgraph-debug-forking/
/fr/subgraphs/guides/subgraph-uncrashable/
/fr/subgraphs/guides/transfer-to-the-graph/
@@ -1285,6 +1325,7 @@
/fr/supported-networks/blast-mainnet/
/fr/supported-networks/blast-testnet/
/fr/supported-networks/bnb-op/
+/fr/supported-networks/bnb-svm/
/fr/supported-networks/boba-bnb-testnet/
/fr/supported-networks/boba-bnb/
/fr/supported-networks/boba-testnet/
@@ -1339,6 +1380,7 @@
/fr/supported-networks/kaia/
/fr/supported-networks/kylin/
/fr/supported-networks/lens-testnet/
+/fr/supported-networks/lens/
/fr/supported-networks/linea-sepolia/
/fr/supported-networks/linea/
/fr/supported-networks/litecoin/
@@ -1369,6 +1411,7 @@
/fr/supported-networks/polygon-amoy/
/fr/supported-networks/polygon-zkevm-cardona/
/fr/supported-networks/polygon-zkevm/
+/fr/supported-networks/ronin/
/fr/supported-networks/rootstock-testnet/
/fr/supported-networks/rootstock/
/fr/supported-networks/scroll-sepolia/
@@ -1385,10 +1428,13 @@
/fr/supported-networks/sonic/
/fr/supported-networks/starknet-mainnet/
/fr/supported-networks/starknet-testnet/
+/fr/supported-networks/stellar-testnet/
+/fr/supported-networks/stellar/
/fr/supported-networks/swellchain-sepolia/
/fr/supported-networks/swellchain/
/fr/supported-networks/telos-testnet/
/fr/supported-networks/telos/
+/fr/supported-networks/ultra/
/fr/supported-networks/unichain-testnet/
/fr/supported-networks/unichain/
/fr/supported-networks/vana-moksha/
@@ -1409,6 +1455,7 @@
/fr/token-api/evm/get-ohlc-prices-evm-by-contract/
/fr/token-api/evm/get-tokens-evm-by-contract/
/fr/token-api/evm/get-transfers-evm-by-address/
+/fr/token-api/faq/
/fr/token-api/mcp/claude/
/fr/token-api/mcp/cline/
/fr/token-api/mcp/cursor/
@@ -1479,6 +1526,7 @@
/hi/subgraphs/guides/near/
/hi/subgraphs/guides/polymarket/
/hi/subgraphs/guides/secure-api-keys-nextjs/
+/hi/subgraphs/guides/subgraph-composition/
/hi/subgraphs/guides/subgraph-debug-forking/
/hi/subgraphs/guides/subgraph-uncrashable/
/hi/subgraphs/guides/transfer-to-the-graph/
@@ -1521,6 +1569,7 @@
/hi/supported-networks/blast-mainnet/
/hi/supported-networks/blast-testnet/
/hi/supported-networks/bnb-op/
+/hi/supported-networks/bnb-svm/
/hi/supported-networks/boba-bnb-testnet/
/hi/supported-networks/boba-bnb/
/hi/supported-networks/boba-testnet/
@@ -1575,6 +1624,7 @@
/hi/supported-networks/kaia/
/hi/supported-networks/kylin/
/hi/supported-networks/lens-testnet/
+/hi/supported-networks/lens/
/hi/supported-networks/linea-sepolia/
/hi/supported-networks/linea/
/hi/supported-networks/litecoin/
@@ -1605,6 +1655,7 @@
/hi/supported-networks/polygon-amoy/
/hi/supported-networks/polygon-zkevm-cardona/
/hi/supported-networks/polygon-zkevm/
+/hi/supported-networks/ronin/
/hi/supported-networks/rootstock-testnet/
/hi/supported-networks/rootstock/
/hi/supported-networks/scroll-sepolia/
@@ -1621,10 +1672,13 @@
/hi/supported-networks/sonic/
/hi/supported-networks/starknet-mainnet/
/hi/supported-networks/starknet-testnet/
+/hi/supported-networks/stellar-testnet/
+/hi/supported-networks/stellar/
/hi/supported-networks/swellchain-sepolia/
/hi/supported-networks/swellchain/
/hi/supported-networks/telos-testnet/
/hi/supported-networks/telos/
+/hi/supported-networks/ultra/
/hi/supported-networks/unichain-testnet/
/hi/supported-networks/unichain/
/hi/supported-networks/vana-moksha/
@@ -1645,6 +1699,7 @@
/hi/token-api/evm/get-ohlc-prices-evm-by-contract/
/hi/token-api/evm/get-tokens-evm-by-contract/
/hi/token-api/evm/get-transfers-evm-by-address/
+/hi/token-api/faq/
/hi/token-api/mcp/claude/
/hi/token-api/mcp/cline/
/hi/token-api/mcp/cursor/
@@ -1715,6 +1770,7 @@
/it/subgraphs/guides/near/
/it/subgraphs/guides/polymarket/
/it/subgraphs/guides/secure-api-keys-nextjs/
+/it/subgraphs/guides/subgraph-composition/
/it/subgraphs/guides/subgraph-debug-forking/
/it/subgraphs/guides/subgraph-uncrashable/
/it/subgraphs/guides/transfer-to-the-graph/
@@ -1757,6 +1813,7 @@
/it/supported-networks/blast-mainnet/
/it/supported-networks/blast-testnet/
/it/supported-networks/bnb-op/
+/it/supported-networks/bnb-svm/
/it/supported-networks/boba-bnb-testnet/
/it/supported-networks/boba-bnb/
/it/supported-networks/boba-testnet/
@@ -1811,6 +1868,7 @@
/it/supported-networks/kaia/
/it/supported-networks/kylin/
/it/supported-networks/lens-testnet/
+/it/supported-networks/lens/
/it/supported-networks/linea-sepolia/
/it/supported-networks/linea/
/it/supported-networks/litecoin/
@@ -1841,6 +1899,7 @@
/it/supported-networks/polygon-amoy/
/it/supported-networks/polygon-zkevm-cardona/
/it/supported-networks/polygon-zkevm/
+/it/supported-networks/ronin/
/it/supported-networks/rootstock-testnet/
/it/supported-networks/rootstock/
/it/supported-networks/scroll-sepolia/
@@ -1857,10 +1916,13 @@
/it/supported-networks/sonic/
/it/supported-networks/starknet-mainnet/
/it/supported-networks/starknet-testnet/
+/it/supported-networks/stellar-testnet/
+/it/supported-networks/stellar/
/it/supported-networks/swellchain-sepolia/
/it/supported-networks/swellchain/
/it/supported-networks/telos-testnet/
/it/supported-networks/telos/
+/it/supported-networks/ultra/
/it/supported-networks/unichain-testnet/
/it/supported-networks/unichain/
/it/supported-networks/vana-moksha/
@@ -1881,6 +1943,7 @@
/it/token-api/evm/get-ohlc-prices-evm-by-contract/
/it/token-api/evm/get-tokens-evm-by-contract/
/it/token-api/evm/get-transfers-evm-by-address/
+/it/token-api/faq/
/it/token-api/mcp/claude/
/it/token-api/mcp/cline/
/it/token-api/mcp/cursor/
@@ -1951,6 +2014,7 @@
/ja/subgraphs/guides/near/
/ja/subgraphs/guides/polymarket/
/ja/subgraphs/guides/secure-api-keys-nextjs/
+/ja/subgraphs/guides/subgraph-composition/
/ja/subgraphs/guides/subgraph-debug-forking/
/ja/subgraphs/guides/subgraph-uncrashable/
/ja/subgraphs/guides/transfer-to-the-graph/
@@ -1993,6 +2057,7 @@
/ja/supported-networks/blast-mainnet/
/ja/supported-networks/blast-testnet/
/ja/supported-networks/bnb-op/
+/ja/supported-networks/bnb-svm/
/ja/supported-networks/boba-bnb-testnet/
/ja/supported-networks/boba-bnb/
/ja/supported-networks/boba-testnet/
@@ -2047,6 +2112,7 @@
/ja/supported-networks/kaia/
/ja/supported-networks/kylin/
/ja/supported-networks/lens-testnet/
+/ja/supported-networks/lens/
/ja/supported-networks/linea-sepolia/
/ja/supported-networks/linea/
/ja/supported-networks/litecoin/
@@ -2077,6 +2143,7 @@
/ja/supported-networks/polygon-amoy/
/ja/supported-networks/polygon-zkevm-cardona/
/ja/supported-networks/polygon-zkevm/
+/ja/supported-networks/ronin/
/ja/supported-networks/rootstock-testnet/
/ja/supported-networks/rootstock/
/ja/supported-networks/scroll-sepolia/
@@ -2093,10 +2160,13 @@
/ja/supported-networks/sonic/
/ja/supported-networks/starknet-mainnet/
/ja/supported-networks/starknet-testnet/
+/ja/supported-networks/stellar-testnet/
+/ja/supported-networks/stellar/
/ja/supported-networks/swellchain-sepolia/
/ja/supported-networks/swellchain/
/ja/supported-networks/telos-testnet/
/ja/supported-networks/telos/
+/ja/supported-networks/ultra/
/ja/supported-networks/unichain-testnet/
/ja/supported-networks/unichain/
/ja/supported-networks/vana-moksha/
@@ -2117,6 +2187,7 @@
/ja/token-api/evm/get-ohlc-prices-evm-by-contract/
/ja/token-api/evm/get-tokens-evm-by-contract/
/ja/token-api/evm/get-transfers-evm-by-address/
+/ja/token-api/faq/
/ja/token-api/mcp/claude/
/ja/token-api/mcp/cline/
/ja/token-api/mcp/cursor/
@@ -2185,6 +2256,7 @@
/ko/subgraphs/guides/near/
/ko/subgraphs/guides/polymarket/
/ko/subgraphs/guides/secure-api-keys-nextjs/
+/ko/subgraphs/guides/subgraph-composition/
/ko/subgraphs/guides/subgraph-debug-forking/
/ko/subgraphs/guides/subgraph-uncrashable/
/ko/subgraphs/guides/transfer-to-the-graph/
@@ -2213,6 +2285,7 @@
/ko/token-api/evm/get-ohlc-prices-evm-by-contract/
/ko/token-api/evm/get-tokens-evm-by-contract/
/ko/token-api/evm/get-transfers-evm-by-address/
+/ko/token-api/faq/
/ko/token-api/mcp/claude/
/ko/token-api/mcp/cline/
/ko/token-api/mcp/cursor/
@@ -2283,6 +2356,7 @@
/mr/subgraphs/guides/near/
/mr/subgraphs/guides/polymarket/
/mr/subgraphs/guides/secure-api-keys-nextjs/
+/mr/subgraphs/guides/subgraph-composition/
/mr/subgraphs/guides/subgraph-debug-forking/
/mr/subgraphs/guides/subgraph-uncrashable/
/mr/subgraphs/guides/transfer-to-the-graph/
@@ -2325,6 +2399,7 @@
/mr/supported-networks/blast-mainnet/
/mr/supported-networks/blast-testnet/
/mr/supported-networks/bnb-op/
+/mr/supported-networks/bnb-svm/
/mr/supported-networks/boba-bnb-testnet/
/mr/supported-networks/boba-bnb/
/mr/supported-networks/boba-testnet/
@@ -2379,6 +2454,7 @@
/mr/supported-networks/kaia/
/mr/supported-networks/kylin/
/mr/supported-networks/lens-testnet/
+/mr/supported-networks/lens/
/mr/supported-networks/linea-sepolia/
/mr/supported-networks/linea/
/mr/supported-networks/litecoin/
@@ -2409,6 +2485,7 @@
/mr/supported-networks/polygon-amoy/
/mr/supported-networks/polygon-zkevm-cardona/
/mr/supported-networks/polygon-zkevm/
+/mr/supported-networks/ronin/
/mr/supported-networks/rootstock-testnet/
/mr/supported-networks/rootstock/
/mr/supported-networks/scroll-sepolia/
@@ -2425,10 +2502,13 @@
/mr/supported-networks/sonic/
/mr/supported-networks/starknet-mainnet/
/mr/supported-networks/starknet-testnet/
+/mr/supported-networks/stellar-testnet/
+/mr/supported-networks/stellar/
/mr/supported-networks/swellchain-sepolia/
/mr/supported-networks/swellchain/
/mr/supported-networks/telos-testnet/
/mr/supported-networks/telos/
+/mr/supported-networks/ultra/
/mr/supported-networks/unichain-testnet/
/mr/supported-networks/unichain/
/mr/supported-networks/vana-moksha/
@@ -2449,6 +2529,7 @@
/mr/token-api/evm/get-ohlc-prices-evm-by-contract/
/mr/token-api/evm/get-tokens-evm-by-contract/
/mr/token-api/evm/get-transfers-evm-by-address/
+/mr/token-api/faq/
/mr/token-api/mcp/claude/
/mr/token-api/mcp/cline/
/mr/token-api/mcp/cursor/
@@ -2517,6 +2598,7 @@
/nl/subgraphs/guides/near/
/nl/subgraphs/guides/polymarket/
/nl/subgraphs/guides/secure-api-keys-nextjs/
+/nl/subgraphs/guides/subgraph-composition/
/nl/subgraphs/guides/subgraph-debug-forking/
/nl/subgraphs/guides/subgraph-uncrashable/
/nl/subgraphs/guides/transfer-to-the-graph/
@@ -2545,6 +2627,7 @@
/nl/token-api/evm/get-ohlc-prices-evm-by-contract/
/nl/token-api/evm/get-tokens-evm-by-contract/
/nl/token-api/evm/get-transfers-evm-by-address/
+/nl/token-api/faq/
/nl/token-api/mcp/claude/
/nl/token-api/mcp/cline/
/nl/token-api/mcp/cursor/
@@ -2613,6 +2696,7 @@
/pl/subgraphs/guides/near/
/pl/subgraphs/guides/polymarket/
/pl/subgraphs/guides/secure-api-keys-nextjs/
+/pl/subgraphs/guides/subgraph-composition/
/pl/subgraphs/guides/subgraph-debug-forking/
/pl/subgraphs/guides/subgraph-uncrashable/
/pl/subgraphs/guides/transfer-to-the-graph/
@@ -2641,6 +2725,7 @@
/pl/token-api/evm/get-ohlc-prices-evm-by-contract/
/pl/token-api/evm/get-tokens-evm-by-contract/
/pl/token-api/evm/get-transfers-evm-by-address/
+/pl/token-api/faq/
/pl/token-api/mcp/claude/
/pl/token-api/mcp/cline/
/pl/token-api/mcp/cursor/
@@ -2711,6 +2796,7 @@
/pt/subgraphs/guides/near/
/pt/subgraphs/guides/polymarket/
/pt/subgraphs/guides/secure-api-keys-nextjs/
+/pt/subgraphs/guides/subgraph-composition/
/pt/subgraphs/guides/subgraph-debug-forking/
/pt/subgraphs/guides/subgraph-uncrashable/
/pt/subgraphs/guides/transfer-to-the-graph/
@@ -2753,6 +2839,7 @@
/pt/supported-networks/blast-mainnet/
/pt/supported-networks/blast-testnet/
/pt/supported-networks/bnb-op/
+/pt/supported-networks/bnb-svm/
/pt/supported-networks/boba-bnb-testnet/
/pt/supported-networks/boba-bnb/
/pt/supported-networks/boba-testnet/
@@ -2807,6 +2894,7 @@
/pt/supported-networks/kaia/
/pt/supported-networks/kylin/
/pt/supported-networks/lens-testnet/
+/pt/supported-networks/lens/
/pt/supported-networks/linea-sepolia/
/pt/supported-networks/linea/
/pt/supported-networks/litecoin/
@@ -2837,6 +2925,7 @@
/pt/supported-networks/polygon-amoy/
/pt/supported-networks/polygon-zkevm-cardona/
/pt/supported-networks/polygon-zkevm/
+/pt/supported-networks/ronin/
/pt/supported-networks/rootstock-testnet/
/pt/supported-networks/rootstock/
/pt/supported-networks/scroll-sepolia/
@@ -2853,10 +2942,13 @@
/pt/supported-networks/sonic/
/pt/supported-networks/starknet-mainnet/
/pt/supported-networks/starknet-testnet/
+/pt/supported-networks/stellar-testnet/
+/pt/supported-networks/stellar/
/pt/supported-networks/swellchain-sepolia/
/pt/supported-networks/swellchain/
/pt/supported-networks/telos-testnet/
/pt/supported-networks/telos/
+/pt/supported-networks/ultra/
/pt/supported-networks/unichain-testnet/
/pt/supported-networks/unichain/
/pt/supported-networks/vana-moksha/
@@ -2877,6 +2969,7 @@
/pt/token-api/evm/get-ohlc-prices-evm-by-contract/
/pt/token-api/evm/get-tokens-evm-by-contract/
/pt/token-api/evm/get-transfers-evm-by-address/
+/pt/token-api/faq/
/pt/token-api/mcp/claude/
/pt/token-api/mcp/cline/
/pt/token-api/mcp/cursor/
@@ -2945,6 +3038,7 @@
/ro/subgraphs/guides/near/
/ro/subgraphs/guides/polymarket/
/ro/subgraphs/guides/secure-api-keys-nextjs/
+/ro/subgraphs/guides/subgraph-composition/
/ro/subgraphs/guides/subgraph-debug-forking/
/ro/subgraphs/guides/subgraph-uncrashable/
/ro/subgraphs/guides/transfer-to-the-graph/
@@ -2973,6 +3067,7 @@
/ro/token-api/evm/get-ohlc-prices-evm-by-contract/
/ro/token-api/evm/get-tokens-evm-by-contract/
/ro/token-api/evm/get-transfers-evm-by-address/
+/ro/token-api/faq/
/ro/token-api/mcp/claude/
/ro/token-api/mcp/cline/
/ro/token-api/mcp/cursor/
@@ -3043,6 +3138,7 @@
/ru/subgraphs/guides/near/
/ru/subgraphs/guides/polymarket/
/ru/subgraphs/guides/secure-api-keys-nextjs/
+/ru/subgraphs/guides/subgraph-composition/
/ru/subgraphs/guides/subgraph-debug-forking/
/ru/subgraphs/guides/subgraph-uncrashable/
/ru/subgraphs/guides/transfer-to-the-graph/
@@ -3085,6 +3181,7 @@
/ru/supported-networks/blast-mainnet/
/ru/supported-networks/blast-testnet/
/ru/supported-networks/bnb-op/
+/ru/supported-networks/bnb-svm/
/ru/supported-networks/boba-bnb-testnet/
/ru/supported-networks/boba-bnb/
/ru/supported-networks/boba-testnet/
@@ -3139,6 +3236,7 @@
/ru/supported-networks/kaia/
/ru/supported-networks/kylin/
/ru/supported-networks/lens-testnet/
+/ru/supported-networks/lens/
/ru/supported-networks/linea-sepolia/
/ru/supported-networks/linea/
/ru/supported-networks/litecoin/
@@ -3169,6 +3267,7 @@
/ru/supported-networks/polygon-amoy/
/ru/supported-networks/polygon-zkevm-cardona/
/ru/supported-networks/polygon-zkevm/
+/ru/supported-networks/ronin/
/ru/supported-networks/rootstock-testnet/
/ru/supported-networks/rootstock/
/ru/supported-networks/scroll-sepolia/
@@ -3185,10 +3284,13 @@
/ru/supported-networks/sonic/
/ru/supported-networks/starknet-mainnet/
/ru/supported-networks/starknet-testnet/
+/ru/supported-networks/stellar-testnet/
+/ru/supported-networks/stellar/
/ru/supported-networks/swellchain-sepolia/
/ru/supported-networks/swellchain/
/ru/supported-networks/telos-testnet/
/ru/supported-networks/telos/
+/ru/supported-networks/ultra/
/ru/supported-networks/unichain-testnet/
/ru/supported-networks/unichain/
/ru/supported-networks/vana-moksha/
@@ -3209,6 +3311,7 @@
/ru/token-api/evm/get-ohlc-prices-evm-by-contract/
/ru/token-api/evm/get-tokens-evm-by-contract/
/ru/token-api/evm/get-transfers-evm-by-address/
+/ru/token-api/faq/
/ru/token-api/mcp/claude/
/ru/token-api/mcp/cline/
/ru/token-api/mcp/cursor/
@@ -3279,6 +3382,7 @@
/sv/subgraphs/guides/near/
/sv/subgraphs/guides/polymarket/
/sv/subgraphs/guides/secure-api-keys-nextjs/
+/sv/subgraphs/guides/subgraph-composition/
/sv/subgraphs/guides/subgraph-debug-forking/
/sv/subgraphs/guides/subgraph-uncrashable/
/sv/subgraphs/guides/transfer-to-the-graph/
@@ -3321,6 +3425,7 @@
/sv/supported-networks/blast-mainnet/
/sv/supported-networks/blast-testnet/
/sv/supported-networks/bnb-op/
+/sv/supported-networks/bnb-svm/
/sv/supported-networks/boba-bnb-testnet/
/sv/supported-networks/boba-bnb/
/sv/supported-networks/boba-testnet/
@@ -3375,6 +3480,7 @@
/sv/supported-networks/kaia/
/sv/supported-networks/kylin/
/sv/supported-networks/lens-testnet/
+/sv/supported-networks/lens/
/sv/supported-networks/linea-sepolia/
/sv/supported-networks/linea/
/sv/supported-networks/litecoin/
@@ -3405,6 +3511,7 @@
/sv/supported-networks/polygon-amoy/
/sv/supported-networks/polygon-zkevm-cardona/
/sv/supported-networks/polygon-zkevm/
+/sv/supported-networks/ronin/
/sv/supported-networks/rootstock-testnet/
/sv/supported-networks/rootstock/
/sv/supported-networks/scroll-sepolia/
@@ -3421,10 +3528,13 @@
/sv/supported-networks/sonic/
/sv/supported-networks/starknet-mainnet/
/sv/supported-networks/starknet-testnet/
+/sv/supported-networks/stellar-testnet/
+/sv/supported-networks/stellar/
/sv/supported-networks/swellchain-sepolia/
/sv/supported-networks/swellchain/
/sv/supported-networks/telos-testnet/
/sv/supported-networks/telos/
+/sv/supported-networks/ultra/
/sv/supported-networks/unichain-testnet/
/sv/supported-networks/unichain/
/sv/supported-networks/vana-moksha/
@@ -3445,6 +3555,7 @@
/sv/token-api/evm/get-ohlc-prices-evm-by-contract/
/sv/token-api/evm/get-tokens-evm-by-contract/
/sv/token-api/evm/get-transfers-evm-by-address/
+/sv/token-api/faq/
/sv/token-api/mcp/claude/
/sv/token-api/mcp/cline/
/sv/token-api/mcp/cursor/
@@ -3515,6 +3626,7 @@
/tr/subgraphs/guides/near/
/tr/subgraphs/guides/polymarket/
/tr/subgraphs/guides/secure-api-keys-nextjs/
+/tr/subgraphs/guides/subgraph-composition/
/tr/subgraphs/guides/subgraph-debug-forking/
/tr/subgraphs/guides/subgraph-uncrashable/
/tr/subgraphs/guides/transfer-to-the-graph/
@@ -3557,6 +3669,7 @@
/tr/supported-networks/blast-mainnet/
/tr/supported-networks/blast-testnet/
/tr/supported-networks/bnb-op/
+/tr/supported-networks/bnb-svm/
/tr/supported-networks/boba-bnb-testnet/
/tr/supported-networks/boba-bnb/
/tr/supported-networks/boba-testnet/
@@ -3611,6 +3724,7 @@
/tr/supported-networks/kaia/
/tr/supported-networks/kylin/
/tr/supported-networks/lens-testnet/
+/tr/supported-networks/lens/
/tr/supported-networks/linea-sepolia/
/tr/supported-networks/linea/
/tr/supported-networks/litecoin/
@@ -3641,6 +3755,7 @@
/tr/supported-networks/polygon-amoy/
/tr/supported-networks/polygon-zkevm-cardona/
/tr/supported-networks/polygon-zkevm/
+/tr/supported-networks/ronin/
/tr/supported-networks/rootstock-testnet/
/tr/supported-networks/rootstock/
/tr/supported-networks/scroll-sepolia/
@@ -3657,10 +3772,13 @@
/tr/supported-networks/sonic/
/tr/supported-networks/starknet-mainnet/
/tr/supported-networks/starknet-testnet/
+/tr/supported-networks/stellar-testnet/
+/tr/supported-networks/stellar/
/tr/supported-networks/swellchain-sepolia/
/tr/supported-networks/swellchain/
/tr/supported-networks/telos-testnet/
/tr/supported-networks/telos/
+/tr/supported-networks/ultra/
/tr/supported-networks/unichain-testnet/
/tr/supported-networks/unichain/
/tr/supported-networks/vana-moksha/
@@ -3681,6 +3799,7 @@
/tr/token-api/evm/get-ohlc-prices-evm-by-contract/
/tr/token-api/evm/get-tokens-evm-by-contract/
/tr/token-api/evm/get-transfers-evm-by-address/
+/tr/token-api/faq/
/tr/token-api/mcp/claude/
/tr/token-api/mcp/cline/
/tr/token-api/mcp/cursor/
@@ -3749,6 +3868,7 @@
/uk/subgraphs/guides/near/
/uk/subgraphs/guides/polymarket/
/uk/subgraphs/guides/secure-api-keys-nextjs/
+/uk/subgraphs/guides/subgraph-composition/
/uk/subgraphs/guides/subgraph-debug-forking/
/uk/subgraphs/guides/subgraph-uncrashable/
/uk/subgraphs/guides/transfer-to-the-graph/
@@ -3777,6 +3897,7 @@
/uk/token-api/evm/get-ohlc-prices-evm-by-contract/
/uk/token-api/evm/get-tokens-evm-by-contract/
/uk/token-api/evm/get-transfers-evm-by-address/
+/uk/token-api/faq/
/uk/token-api/mcp/claude/
/uk/token-api/mcp/cline/
/uk/token-api/mcp/cursor/
@@ -3847,6 +3968,7 @@
/ur/subgraphs/guides/near/
/ur/subgraphs/guides/polymarket/
/ur/subgraphs/guides/secure-api-keys-nextjs/
+/ur/subgraphs/guides/subgraph-composition/
/ur/subgraphs/guides/subgraph-debug-forking/
/ur/subgraphs/guides/subgraph-uncrashable/
/ur/subgraphs/guides/transfer-to-the-graph/
@@ -3889,6 +4011,7 @@
/ur/supported-networks/blast-mainnet/
/ur/supported-networks/blast-testnet/
/ur/supported-networks/bnb-op/
+/ur/supported-networks/bnb-svm/
/ur/supported-networks/boba-bnb-testnet/
/ur/supported-networks/boba-bnb/
/ur/supported-networks/boba-testnet/
@@ -3943,6 +4066,7 @@
/ur/supported-networks/kaia/
/ur/supported-networks/kylin/
/ur/supported-networks/lens-testnet/
+/ur/supported-networks/lens/
/ur/supported-networks/linea-sepolia/
/ur/supported-networks/linea/
/ur/supported-networks/litecoin/
@@ -3973,6 +4097,7 @@
/ur/supported-networks/polygon-amoy/
/ur/supported-networks/polygon-zkevm-cardona/
/ur/supported-networks/polygon-zkevm/
+/ur/supported-networks/ronin/
/ur/supported-networks/rootstock-testnet/
/ur/supported-networks/rootstock/
/ur/supported-networks/scroll-sepolia/
@@ -3989,10 +4114,13 @@
/ur/supported-networks/sonic/
/ur/supported-networks/starknet-mainnet/
/ur/supported-networks/starknet-testnet/
+/ur/supported-networks/stellar-testnet/
+/ur/supported-networks/stellar/
/ur/supported-networks/swellchain-sepolia/
/ur/supported-networks/swellchain/
/ur/supported-networks/telos-testnet/
/ur/supported-networks/telos/
+/ur/supported-networks/ultra/
/ur/supported-networks/unichain-testnet/
/ur/supported-networks/unichain/
/ur/supported-networks/vana-moksha/
@@ -4013,6 +4141,7 @@
/ur/token-api/evm/get-ohlc-prices-evm-by-contract/
/ur/token-api/evm/get-tokens-evm-by-contract/
/ur/token-api/evm/get-transfers-evm-by-address/
+/ur/token-api/faq/
/ur/token-api/mcp/claude/
/ur/token-api/mcp/cline/
/ur/token-api/mcp/cursor/
@@ -4081,6 +4210,7 @@
/vi/subgraphs/guides/near/
/vi/subgraphs/guides/polymarket/
/vi/subgraphs/guides/secure-api-keys-nextjs/
+/vi/subgraphs/guides/subgraph-composition/
/vi/subgraphs/guides/subgraph-debug-forking/
/vi/subgraphs/guides/subgraph-uncrashable/
/vi/subgraphs/guides/transfer-to-the-graph/
@@ -4109,6 +4239,7 @@
/vi/token-api/evm/get-ohlc-prices-evm-by-contract/
/vi/token-api/evm/get-tokens-evm-by-contract/
/vi/token-api/evm/get-transfers-evm-by-address/
+/vi/token-api/faq/
/vi/token-api/mcp/claude/
/vi/token-api/mcp/cline/
/vi/token-api/mcp/cursor/
@@ -4179,6 +4310,7 @@
/zh/subgraphs/guides/near/
/zh/subgraphs/guides/polymarket/
/zh/subgraphs/guides/secure-api-keys-nextjs/
+/zh/subgraphs/guides/subgraph-composition/
/zh/subgraphs/guides/subgraph-debug-forking/
/zh/subgraphs/guides/subgraph-uncrashable/
/zh/subgraphs/guides/transfer-to-the-graph/
@@ -4221,6 +4353,7 @@
/zh/supported-networks/blast-mainnet/
/zh/supported-networks/blast-testnet/
/zh/supported-networks/bnb-op/
+/zh/supported-networks/bnb-svm/
/zh/supported-networks/boba-bnb-testnet/
/zh/supported-networks/boba-bnb/
/zh/supported-networks/boba-testnet/
@@ -4275,6 +4408,7 @@
/zh/supported-networks/kaia/
/zh/supported-networks/kylin/
/zh/supported-networks/lens-testnet/
+/zh/supported-networks/lens/
/zh/supported-networks/linea-sepolia/
/zh/supported-networks/linea/
/zh/supported-networks/litecoin/
@@ -4305,6 +4439,7 @@
/zh/supported-networks/polygon-amoy/
/zh/supported-networks/polygon-zkevm-cardona/
/zh/supported-networks/polygon-zkevm/
+/zh/supported-networks/ronin/
/zh/supported-networks/rootstock-testnet/
/zh/supported-networks/rootstock/
/zh/supported-networks/scroll-sepolia/
@@ -4321,10 +4456,13 @@
/zh/supported-networks/sonic/
/zh/supported-networks/starknet-mainnet/
/zh/supported-networks/starknet-testnet/
+/zh/supported-networks/stellar-testnet/
+/zh/supported-networks/stellar/
/zh/supported-networks/swellchain-sepolia/
/zh/supported-networks/swellchain/
/zh/supported-networks/telos-testnet/
/zh/supported-networks/telos/
+/zh/supported-networks/ultra/
/zh/supported-networks/unichain-testnet/
/zh/supported-networks/unichain/
/zh/supported-networks/vana-moksha/
@@ -4345,6 +4483,7 @@
/zh/token-api/evm/get-ohlc-prices-evm-by-contract/
/zh/token-api/evm/get-tokens-evm-by-contract/
/zh/token-api/evm/get-transfers-evm-by-address/
+/zh/token-api/faq/
/zh/token-api/mcp/claude/
/zh/token-api/mcp/cline/
/zh/token-api/mcp/cursor/
diff --git a/website/src/pages/ar/about.mdx b/website/src/pages/ar/about.mdx
index 8005f34aef5f..93dbeb51f658 100644
--- a/website/src/pages/ar/about.mdx
+++ b/website/src/pages/ar/about.mdx
@@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block
## The Graph Provides a Solution
-The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API.
+The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API.
Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process.
### How The Graph Functions
-Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL.
+Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL.
#### Specifics
-- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph.
+- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph.
-- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database.
+- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database.
-- When creating a subgraph, you need to write a subgraph manifest.
+- When creating a Subgraph, you need to write a Subgraph manifest.
-- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph.
+- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph.
-The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions.
+The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions.

@@ -56,12 +56,12 @@ The diagram below provides more detailed information about the flow of data afte
1. A dapp adds data to Ethereum through a transaction on a smart contract.
2. العقد الذكي يصدر حدثا واحدا أو أكثر أثناء معالجة الإجراء.
-3. يقوم الـ Graph Node بمسح الـ Ethereum باستمرار بحثا عن الكتل الجديدة وبيانات الـ subgraph الخاص بك.
-4. يعثر الـ Graph Node على أحداث الـ Ethereum لـ subgraph الخاص بك في هذه الكتل ويقوم بتشغيل mapping handlers التي قدمتها. الـ mapping عبارة عن وحدة WASM والتي تقوم بإنشاء أو تحديث البيانات التي يخزنها Graph Node استجابة لأحداث الـ Ethereum.
+3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph that they may contain.
+4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events.
5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats.
## الخطوات التالية
-The following sections provide a more in-depth look at subgraphs, their deployment and data querying.
+The following sections provide a more in-depth look at Subgraphs, their deployment and data querying.
-Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data.
+Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data.
diff --git a/website/src/pages/ar/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/ar/archived/arbitrum/arbitrum-faq.mdx
index 898175b05cad..e1dbbea03383 100644
--- a/website/src/pages/ar/archived/arbitrum/arbitrum-faq.mdx
+++ b/website/src/pages/ar/archived/arbitrum/arbitrum-faq.mdx
@@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from:
- Security inherited from Ethereum
-Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas.
+Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas.
The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion.
@@ -39,7 +39,7 @@ To take advantage of using The Graph on L2, use this dropdown switcher to toggle

-## As a subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now?
+## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now?
Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support.
@@ -51,9 +51,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto
Everything has been tested thoroughly, and a contingency plan is in place to ensure a safe and seamless transition. Details can be found [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20).
-## Are existing subgraphs on Ethereum working?
+## Are existing Subgraphs on Ethereum working?
-All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly.
+All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly.
## Does GRT have a new smart contract deployed on Arbitrum?
diff --git a/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-faq.mdx
index 9c949027b41f..965c96f7355a 100644
--- a/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-faq.mdx
+++ b/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-faq.mdx
@@ -24,9 +24,9 @@ The exception is with smart contract wallets like multisigs: these are smart con
The L2 Transfer Tools use Arbitrum’s native mechanism to send messages from L1 to L2. This mechanism is called a “retryable ticket” and is used by all native token bridges, including the Arbitrum GRT bridge. You can read more about retryable tickets in the [Arbitrum docs](https://docs.arbitrum.io/arbos/l1-to-l2-messaging).
-When you transfer your assets (subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum).
+When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum).
-This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there help you.
+This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you.
### I started my delegation/stake/curation transfer and I'm not sure if it made it through to L2, how can I confirm that it was transferred correctly?
@@ -36,43 +36,43 @@ If you have the L1 transaction hash (which you can find by looking at the recent
## نقل الـ Subgraph (الرسم البياني الفرعي)
-### كيفكيف أقوم بتحويل الـ subgraph الخاص بي؟
+### How do I transfer my Subgraph?
-لنقل الـ subgraph الخاص بك ، ستحتاج إلى إكمال الخطوات التالية:
+To transfer your Subgraph, you will need to complete the following steps:
1. ابدأ التحويل على شبكة Ethereum mainnet
2. انتظر 20 دقيقة للتأكيد
-3. قم بتأكيد نقل الـ subgraph على Arbitrum \ \*
+3. Confirm Subgraph transfer on Arbitrum\*
-4. قم بإنهاء نشر الـ subgraph على Arbitrum
+4. Finish publishing Subgraph on Arbitrum
5. جدث عنوان URL للاستعلام (مستحسن)
-\*Note that you must confirm the transfer within 7 days otherwise your subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
+\*Note that you must confirm the transfer within 7 days otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
### من أين يجب أن أبدأ التحويل ؟
-يمكنك بدء عملية النقل من [Subgraph Studio] (https://thegraph.com/studio/) ، [Explorer ،] (https://thegraph.com/explorer) أو من أي صفحة تفاصيل subgraph. انقر فوق الزر "Transfer Subgraph" في صفحة تفاصيل الرسم الـ subgraph لبدء النقل.
+You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer](https://thegraph.com/explorer), or any Subgraph details page. Click the "Transfer Subgraph" button on the Subgraph details page to start the transfer.
-### كم من الوقت سأنتظر حتى يتم نقل الـ subgraph الخاص بي
+### How long do I need to wait until my Subgraph is transferred?
يستغرق وقت النقل حوالي 20 دقيقة. يعمل جسر Arbitrum في الخلفية لإكمال نقل الجسر تلقائيًا. في بعض الحالات ، قد ترتفع تكاليف الغاز وستحتاج إلى تأكيد المعاملة مرة أخرى.
-### هل سيظل الـ subgraph قابلاً للاكتشاف بعد أن أنقله إلى L2؟
+### Will my Subgraph still be discoverable after I transfer it to L2?
-سيكون الـ subgraph الخاص بك قابلاً للاكتشاف على الشبكة التي تم نشرها عليها فقط. على سبيل المثال ، إذا كان الـ subgraph الخاص بك موجودًا على Arbitrum One ، فيمكنك العثور عليه فقط في Explorer على Arbitrum One ولن تتمكن من العثور عليه على Ethereum. يرجى التأكد من تحديد Arbitrum One في مبدل الشبكة في أعلى الصفحة للتأكد من أنك على الشبكة الصحيحة. بعد النقل ، سيظهر الـ L1 subgraph على أنه مهمل.
+Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please make sure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network. After the transfer, the L1 Subgraph will appear as deprecated.
-### هل يلزم نشر الـ subgraph الخاص بي لنقله؟
+### Does my Subgraph need to be published to transfer it?
-للاستفادة من أداة نقل الـ subgraph ، يجب أن يكون الرسم البياني الفرعي الخاص بك قد تم نشره بالفعل على شبكة Ethereum الرئيسية ويجب أن يكون لديه إشارة تنسيق مملوكة للمحفظة التي تمتلك الرسم البياني الفرعي. إذا لم يتم نشر الرسم البياني الفرعي الخاص بك ، فمن المستحسن أن تقوم ببساطة بالنشر مباشرة على Arbitrum One - ستكون رسوم الغاز أقل بكثير. إذا كنت تريد نقل رسم بياني فرعي منشور ولكن حساب المالك لا يملك إشارة تنسيق عليه ، فيمكنك الإشارة بمبلغ صغير (على سبيل المثال 1 GRT) من ذلك الحساب ؛ تأكد من اختيار إشارة "auto-migrating".
+To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal.
-### ماذا يحدث لإصدار Ethereum mainnet للرسم البياني الفرعي الخاص بي بعد أن النقل إلى Arbitrum؟
+### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum?
-بعد نقل الرسم البياني الفرعي الخاص بك إلى Arbitrum ، سيتم إهمال إصدار Ethereum mainnet. نوصي بتحديث عنوان URL للاستعلام في غضون 48 ساعة. ومع ذلك ، هناك فترة سماح تحافظ على عمل عنوان URL للشبكة الرئيسية الخاصة بك بحيث يمكن تحديث أي دعم dapp لجهة خارجية.
+After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated.
### بعد النقل ، هل أحتاج أيضًا إلى إعادة النشر على Arbitrum؟
@@ -80,21 +80,21 @@ If you have the L1 transaction hash (which you can find by looking at the recent
### Will my endpoint experience downtime while re-publishing?
-It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the subgraph on L1 and whether they keep indexing it until the subgraph is fully supported on L2.
+It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2.
### هل يتم نشر وتخطيط الإصدار بنفس الطريقة في الـ L2 كما هو الحال في شبكة Ethereum Ethereum mainnet؟
-Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the subgraph.
+Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph.
-### هل سينتقل تنسيق الـ subgraph مع الـ subgraph ؟
+### Will my Subgraph's curation move with my Subgraph?
-إذا اخترت إشارة الترحيل التلقائي auto-migrating ، فسيتم نقل 100٪ من التنسيق مع الرسم البياني الفرعي الخاص بك إلى Arbitrum One. سيتم تحويل كل إشارة التنسيق الخاصة بالرسم الفرعي إلى GRT في وقت النقل ، وسيتم استخدام GRT المقابل لإشارة التنسيق الخاصة بك لصك الإشارة على L2 subgraph.
+If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph.
-يمكن للمنسقين الآخرين اختيار ما إذا كانوا سيسحبون أجزاء من GRT ، أو ينقلونه أيضًا إلى L2 لإنتاج إشارة على نفس الرسم البياني الفرعي.
+Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph.
-### هل يمكنني إعادة الرسم البياني الفرعي الخاص بي إلى Ethereum mainnet بعد أن أقوم بالنقل؟
+### Can I move my Subgraph back to Ethereum mainnet after I transfer?
-بمجرد النقل ، سيتم إهمال إصدار شبكة Ethereum mainnet للرسم البياني الفرعي الخاص بك. إذا كنت ترغب في العودة إلى mainnet ، فستحتاج إلى إعادة النشر (redeploy) والنشر مرة أخرى على mainnet. ومع ذلك ، لا يُنصح بشدة بالتحويل مرة أخرى إلى شبكة Ethereum mainnet حيث سيتم في النهاية توزيع مكافآت الفهرسة بالكامل على Arbitrum One.
+Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One.
### لماذا أحتاج إلى Bridged ETH لإكمال النقل؟
@@ -206,19 +206,19 @@ The tokens that are being undelegated are "locked" and therefore cannot be trans
\ \* إذا لزم الأمر -أنت تستخدم عنوان عقد.
-### كيف سأعرف ما إذا كان الرسم البياني الفرعي الذي قمت بعمل إشارة تنسيق عليه قد انتقل إلى L2؟
+### How will I know if the Subgraph I curated has moved to L2?
-عند عرض صفحة تفاصيل الرسم البياني الفرعي ، ستعلمك لافتة بأنه تم نقل هذا الرسم البياني الفرعي. يمكنك اتباع التعليمات لنقل إشارة التنسيق الخاص بك. يمكنك أيضًا العثور على هذه المعلومات في صفحة تفاصيل الرسم البياني الفرعي لأي رسم بياني فرعي تم نقله.
+When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved.
### ماذا لو كنت لا أرغب في نقل إشارة التنسيق الخاص بي إلى L2؟
-عندما يتم إهمال الرسم البياني الفرعي ، يكون لديك خيار سحب الإشارة. وبالمثل ، إذا انتقل الرسم البياني الفرعي إلى L2 ، فيمكنك اختيار سحب الإشارة في شبكة Ethereum الرئيسية أو إرسال الإشارة إلى L2.
+When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2.
### كيف أعرف أنه تم نقل إشارة التنسيق بنجاح؟
يمكن الوصول إلى تفاصيل الإشارة عبر Explorer بعد حوالي 20 دقيقة من بدء أداة النقل للـ L2.
-### هل يمكنني نقل إشاة التنسيق الخاص بي على أكثر من رسم بياني فرعي في وقت واحد؟
+### Can I transfer my curation on more than one Subgraph at a time?
لا يوجد خيار كهذا حالياً.
@@ -266,7 +266,7 @@ The tokens that are being undelegated are "locked" and therefore cannot be trans
### هل يجب أن أقوم بالفهرسة على Arbitrum قبل أن أنقل حصتي؟
-يمكنك تحويل حصتك بشكل فعال أولاً قبل إعداد الفهرسة ، ولكن لن تتمكن من المطالبة بأي مكافآت على L2 حتى تقوم بتخصيصها لـ subgraphs على L2 وفهرستها وعرض POIs.
+You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs.
### هل يستطيع المفوضون نقل تفويضهم قبل نقل indexing stake الخاص بي؟
diff --git a/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-guide.mdx
index af5a133538d6..5863ff2de0a2 100644
--- a/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-guide.mdx
+++ b/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-guide.mdx
@@ -6,53 +6,53 @@ title: L2 Transfer Tools Guide
Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them.
-## كيف تنقل الغراف الفرعي الخاص بك إلى شبكة آربترم (الطبقة الثانية)
+## How to transfer your Subgraph to Arbitrum (L2)
-## فوائد نقل الغراف الفرعي الخاصة بك
+## Benefits of transferring your Subgraphs
مجتمع الغراف والمطورون الأساسيون كانوا [يستعدون] (https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) للإنتقال إلى آربترم على مدى العام الماضي. وتعتبر آربترم سلسلة كتل من الطبقة الثانية أو "L2"، حيث ترث الأمان من سلسلة الإيثيريوم ولكنها توفر رسوم غازٍ أقل بشكلٍ كبير.
-عندما تقوم بنشر أو ترقية الغرافات الفرعية الخاصة بك إلى شبكة الغراف، فأنت تتفاعل مع عقودٍ ذكيةٍ في البروتوكول وهذا يتطلب دفع رسوم الغاز باستخدام عملة الايثيريوم. من خلال نقل غرافاتك الفرعية إلى آربترم، فإن أي ترقيات مستقبلية لغرافك الفرعي ستتطلب رسوم غازٍ أقل بكثير. الرسوم الأقل، وكذلك حقيقة أن منحنيات الترابط التنسيقي على الطبقة الثانية مستقيمة، تجعل من الأسهل على المنسِّقين الآخرين تنسيق غرافك الفرعي، ممّا يزيد من مكافآت المفهرِسين على غرافك الفرعي. هذه البيئة ذات التكلفة-الأقل كذلك تجعل من الأرخص على المفهرسين أن يقوموا بفهرسة وخدمة غرافك الفرعي. سوف تزداد مكافآت الفهرسة على آربترم وتتناقص على شبكة إيثيريوم الرئيسية على مدى الأشهر المقبلة، لذلك سيقوم المزيد والمزيد من المُفَهرِسين بنقل ودائعهم المربوطة وتثبيت عملياتهم على الطبقة الثانية.
+When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2.
-## فهم ما يحدث مع الإشارة وغرافك الفرعي على الطبقة الأولى وعناوين مواقع الإستعلام
+## Understanding what happens with signal, your L1 Subgraph and query URLs
-عند نقل سبجراف إلى Arbitrum، يتم استخدام جسر Arbitrum GRT، الذي بدوره يستخدم جسر Arbitrum الأصلي، لإرسال السبجراف إلى L2. سيؤدي عملية "النقل" إلى إهمال السبجراف على شبكة الإيثيريوم الرئيسية وإرسال المعلومات لإعادة إنشاء السبجراف على L2 باستخدام الجسر. ستتضمن أيضًا رصيد GRT المرهون المرتبط بمالك السبجراف، والذي يجب أن يكون أكبر من الصفر حتى يقبل الجسر النقل.
+Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer.
-عندما تختار نقل الرسم البياني الفرعي ، سيؤدي ذلك إلى تحويل جميع إشارات التنسيق الخاصة بالرسم الفرعي إلى GRT. هذا يعادل "إهمال" الرسم البياني الفرعي على الشبكة الرئيسية. سيتم إرسال GRT المستخدمة لعملية التنسيق الخاصة بك إلى L2 جمباً إلى جمب مع الرسم البياني الفرعي ، حيث سيتم استخدامها لإنتاج الإشارة نيابة عنك.
+When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf.
-يمكن للمنسقين الآخرين اختيار ما إذا كانوا سيسحبون جزء من GRT الخاص بهم ، أو نقله أيضًا إلى L2 لصك إشارة على نفس الرسم البياني الفرعي. إذا لم يقم مالك الرسم البياني الفرعي بنقل الرسم البياني الفرعي الخاص به إلى L2 وقام بإيقافه يدويًا عبر استدعاء العقد ، فسيتم إخطار المنسقين وسيتمكنون من سحب تنسيقهم.
+Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation.
-بمجرد نقل الرسم البياني الفرعي ، لن يتلقى المفهرسون بعد الآن مكافآت لفهرسة الرسم البياني الفرعي، نظرًا لأنه يتم تحويل كل التنسيق لـ GRT. ومع ذلك ، سيكون هناك مفهرسون 1) سيستمرون في خدمة الرسوم البيانية الفرعية المنقولة لمدة 24 ساعة ، و 2) سيبدأون فورًا في فهرسة الرسم البياني الفرعي على L2. ونظرًا لأن هؤلاء المفهرسون لديهم بالفعل رسم بياني فرعي مفهرس ، فلا داعي لانتظار مزامنة الرسم البياني الفرعي ، وسيكون من الممكن الاستعلام عن الرسم البياني الفرعي على L2 مباشرة تقريبًا.
+As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately.
-يجب إجراء الاستعلامات على الرسم البياني الفرعي في L2 على عنوان URL مختلف (على \`` Arbitrum-gateway.thegraph.com`) ، لكن عنوان URL L1 سيستمر في العمل لمدة 48 ساعة على الأقل. بعد ذلك ، ستقوم بوابة L1 بإعادة توجيه الاستعلامات إلى بوابة L2 (لبعض الوقت) ، ولكن هذا سيضيف زمن تأخير لذلك يوصى تغيير جميع استعلاماتك إلى عنوان URL الجديد في أقرب وقت ممكن.
+Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible.
## اختيار محفظة L2 الخاصة بك
-عندما قمت بنشر subgraph الخاص بك على الشبكة الرئيسية ، فقد استخدمت محفظة متصلة لإنشاء subgraph ، وتمتلك هذه المحفظة NFT الذي يمثل هذا subgraph ويسمح لك بنشر التحديثات.
+When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates.
-عند نقل الرسم البياني الفرعي إلى Arbitrum ، يمكنك اختيار محفظة مختلفة والتي ستمتلك هذا الـ subgraph NFT على L2.
+When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2.
إذا كنت تستخدم محفظة "عادية" مثل MetaMask (حساب مملوك خارجيًا EOA ، محفظة ليست بعقد ذكي) ، فهذا اختياري ويوصى بالاحتفاظ بعنوان المالك نفسه كما في L1.
-إذا كنت تستخدم محفظة بعقد ذكي ، مثل multisig (على سبيل المثال Safe) ، فإن اختيار عنوان مختلف لمحفظة L2 أمر إلزامي ، حيث من المرجح أن هذا الحساب موجود فقط على mainnet ولن تكون قادرًا على إجراء المعاملات على Arbitrum باستخدام هذه المحفظة. إذا كنت ترغب في الاستمرار في استخدام محفظة عقد ذكية أو multisig ، فقم بإنشاء محفظة جديدة على Arbitrum واستخدم عنوانها كمالك للرسم البياني الفرعي الخاص بك على L2.
+If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph.
-** من المهم جدًا استخدام عنوان محفظة تتحكم فيه ، ويمكنه إجراء معاملات على Arbitrum. وإلا فسيتم فقد الرسم البياني الفرعي ولا يمكن استعادته. **
+**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.**
## Preparing for the transfer: bridging some ETH
-يتضمن نقل الغراف الفرعي إرسال معاملة عبر الجسر ، ثم تنفيذ معاملة أخرى على شبكة أربترم. تستخدم المعاملة الأولى الإيثيريوم على الشبكة الرئيسية ، وتتضمن بعضًا من إيثيريوم لدفع ثمن الغاز عند استلام الرسالة على الطبقة الثانية. ومع ذلك ، إذا كان هذا الغاز غير كافٍ ، فسيتعين عليك إعادة إجراء المعاملة ودفع ثمن الغاز مباشرةً على الطبقة الثانية (هذه هي "الخطوة 3: تأكيد التحويل" أدناه). يجب تنفيذ هذه الخطوة ** في غضون 7 أيام من بدء التحويل **. علاوة على ذلك ، سيتم إجراء المعاملة الثانية مباشرة على شبكة أربترم ("الخطوة 4: إنهاء التحويل على الطبقة الثانية"). لهذه الأسباب ، ستحتاج بعضًا من إيثيريوم في محفظة أربترم. إذا كنت تستخدم متعدد التواقيع أو عقداً ذكياً ، فيجب أن يكون هناك بعضًا من إيثيريوم في المحفظة العادية (حساب مملوك خارجيا) التي تستخدمها لتنفيذ المعاملات ، وليس على محفظة متعددة التواقيع.
+Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself.
You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io). Since gas fees on Arbitrum are lower, you should only need a small amount; it is recommended to start with a low amount (e.g. 0.01 ETH) to get your transaction approved.
-## العثور على أداة نقل الغراف الفرعي
+## Finding the Subgraph Transfer Tool
-يمكنك العثور على أداة نقل L2 في صفحة الرسم البياني الفرعي الخاص بك على Subgraph Studio:
+You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio:

-إذا كنت متصلاً بالمحفظة التي تمتلك الغراف الفرعي، فيمكنك الوصول إليها عبر المستكشف، وذلك عن طريق الانتقال إلى صفحة الغراف الفرعي على المستكشف:
+It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer:

@@ -60,19 +60,19 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools
## Step 1: Starting the transfer
-قبل بدء عملية النقل، يجب أن تقرر أي عنوان سيكون مالكًا للغراف الفرعي على الطبقة الثانية (انظر "اختيار محفظة الطبقة الثانية" أعلاه)، ويُوصَى بشدة بأن يكون لديك بعضًا من الإيثيريوم لرسوم الغاز على أربترم. يمكنك الاطلاع على (التحضير لعملية النقل: تحويل بعضًا من إيثيريوم عبر الجسر." أعلاه).
+Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above).
-يرجى أيضًا ملاحظة أن نقل الرسم البياني الفرعي يتطلب وجود كمية غير صفرية من إشارة التنسيق عليه بنفس الحساب الذي يمتلك الرسم البياني الفرعي ؛ إذا لم تكن قد أشرت إلى الرسم البياني الفرعي ، فسيتعين عليك إضافة القليل من إشارة التنسيق (يكفي إضافة مبلغ صغير مثل 1 GRT).
+Also please note transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice).
-بعد فتح أداة النقل، ستتمكن من إدخال عنوان المحفظة في الطبقة الثانية في حقل "عنوان محفظة الاستلام". تأكد من إدخال العنوان الصحيح هنا. بعد ذلك، انقر على "نقل الغراف الفرعي"، وسيتم طلب تنفيذ العملية في محفظتك. (يُرجى ملاحظة أنه يتم تضمين بعضًا من الإثيريوم لدفع رسوم الغاز في الطبقة الثانية). بعد تنفيذ العملية، سيتم بدء عملية النقل وإهمال الغراف الفرعي في الطبقة الأولى. (يمكنك الاطلاع على "فهم ما يحدث مع الإشارة والغراف الفرعي في الطبقة الأولى وعناوين الاستعلام" أعلاه لمزيد من التفاصيل حول ما يحدث خلف الكواليس).
+After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes).
-إذا قمت بتنفيذ هذه الخطوة، \*\*يجب عليك التأكد من أنك ستستكمل الخطوة 3 في غضون 7 أيام، وإلا فإنك ستفقد الغراف الفرعي والإشارة GRT الخاصة بك. يرجع ذلك إلى آلية التواصل بين الطبقة الأولى والطبقة الثانية في أربترم: الرسائل التي ترسل عبر الجسر هي "تذاكر قابلة لإعادة المحاولة" يجب تنفيذها في غضون 7 أيام، وقد يتطلب التنفيذ الأولي إعادة المحاولة إذا كان هناك زيادة في سعر الغاز على أربترم.
+If you execute this step, **make sure you complete step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retryable tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum.
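The 7-day window above can be sketched as a simple deadline check. This is an illustrative helper only (the function names are hypothetical, and it is not part of the Transfer Tool); the real expiry is enforced by Arbitrum's retryable-ticket mechanism on-chain.

```python
from datetime import datetime, timedelta, timezone

RETRY_WINDOW = timedelta(days=7)  # bridge tickets must be executed within 7 days

def retry_deadline(transfer_started_at: datetime) -> datetime:
    """Latest moment by which step 3 (confirming the transfer) must succeed."""
    return transfer_started_at + RETRY_WINDOW

def ticket_still_valid(transfer_started_at: datetime, now: datetime) -> bool:
    """True while the bridge ticket can still be retried."""
    return now < retry_deadline(transfer_started_at)

started = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(retry_deadline(started).isoformat())  # 2024-01-08T00:00:00+00:00
print(ticket_still_valid(started, started + timedelta(days=6)))  # True
print(ticket_still_valid(started, started + timedelta(days=8)))  # False
```

In practice, prefer retrying well before the deadline, since the retry itself needs time to confirm on L2.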

-## الخطوة 2: الانتظار حتى يتم نقل الغراف الفرعي إلى الطبقة الثانية
+## Step 2: Waiting for the Subgraph to get to L2
-بعد بدء عملية النقل، يتعين على الرسالة التي ترسل الـ subgraph من L1 إلى L2 أن يتم نشرها عبر جسر Arbitrum. يستغرق ذلك حوالي 20 دقيقة (ينتظر الجسر لكتلة الشبكة الرئيسية التي تحتوي على المعاملة حتى يتأكد أنها "آمنة" من إمكانية إعادة ترتيب السلسلة).
+After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs).
Once that wait time is over, Arbitrum will attempt to auto-execute the transfer on the L2 contracts.
@@ -80,7 +80,7 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools
## Step 3: Confirming the transfer
-في معظم الحالات ، سيتم تنفيذ هذه الخطوة تلقائيًا لأن غاز الطبقة الثانية المضمن في الخطوة 1 يجب أن يكون كافيًا لتنفيذ المعاملة التي تتلقى الغراف الفرعي في عقود أربترم. ومع ذلك ، في بعض الحالات ، من الممكن أن يؤدي ارتفاع أسعار الغاز على أربترم إلى فشل هذا التنفيذ التلقائي. وفي هذه الحالة ، ستكون "التذكرة" التي ترسل غرافك الفرعي إلى الطبقة الثانية معلقة وتتطلب إعادة المحاولة في غضون 7 أيام.
+In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days.
In this case, you will need to connect using an L2 wallet that has some ETH on Arbitrum, switch your wallet network to Arbitrum, and click on "Confirm Transfer" to retry the transaction.
@@ -88,33 +88,33 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools
## Step 4: Finishing the transfer on L2
-في هذه المرحلة، تم استلام الغراف الفرعي والـ GRT الخاص بك على أربترم، ولكن الغراف الفرعي لم يتم نشره بعد. ستحتاج إلى الربط باستخدام محفظة الطبقة الثانية التي اخترتها كمحفظة استلام، وتغيير شبكة محفظتك إلى أربترم، ثم النقر على "نشر الغراف الفرعي"
+At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph."
-
+
-
+
-سيؤدي هذا إلى نشر الغراف الفرعي حتى يتمكن المفهرسون الذين يعملون في أربترم بالبدء في تقديم الخدمة. كما أنه سيعمل أيضًا على إصدار إشارة التنسيق باستخدام GRT التي تم نقلها من الطبقة الأولى.
+This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that was transferred from L1.
## Step 5: Updating the query URL
-تم نقل غرافك الفرعي بنجاح إلى أربترم! للاستعلام عن الغراف الفرعي ، سيكون عنوان URL الجديد هو:
+Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be:
`https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]`
-لاحظ أن ID الغراف الفرعي على أربترم سيكون مختلفًا عن الذي لديك في الشبكة الرئيسية، ولكن يمكنك العثور عليه في المستكشف أو استوديو. كما هو مذكور أعلاه (راجع "فهم ما يحدث للإشارة والغراف الفرعي في الطبقة الأولى وعناوين الاستعلام") سيتم دعم عنوان URL الطبقة الأولى القديم لفترة قصيرة ، ولكن يجب عليك تبديل استعلاماتك إلى العنوان الجديد بمجرد مزامنة الغراف الفرعي على الطبقة الثانية.
+Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2.
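As a sketch of the switch-over, the new gateway URL can be assembled from your API key and the L2 Subgraph ID, and a GraphQL query is then POSTed to it as JSON. The helper names below are hypothetical, and the key/ID values are placeholders you would take from Studio or Explorer.

```python
import json
from urllib import request

def l2_query_url(api_key: str, l2_subgraph_id: str) -> str:
    """Build the Arbitrum gateway query URL for a transferred Subgraph."""
    return (
        "https://arbitrum-gateway.thegraph.com/api/"
        f"{api_key}/subgraphs/id/{l2_subgraph_id}"
    )

def graphql_request(url: str, query: str) -> request.Request:
    """Prepare (without sending) a POST request carrying a GraphQL query."""
    body = json.dumps({"query": query}).encode()
    return request.Request(url, data=body, headers={"Content-Type": "application/json"})

req = graphql_request(
    l2_query_url("my-api-key", "my-l2-subgraph-id"),
    "{ _meta { block { number } } }",
)
print(req.full_url)
print(req.get_method())  # POST — urllib infers POST whenever data is set
```

Sending the prepared request with `request.urlopen(req)` would return the JSON response body once the Subgraph is synced on L2.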
## How to transfer your curation to Arbitrum (L2)
-## Understanding what happens to curation on subgraph transfers to L2
+## Understanding what happens to curation on Subgraph transfers to L2
-When the owner of a subgraph transfers a subgraph to Arbitrum, all of the subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a subgraph version or deployment but that follows the latest version of a subgraph.
+When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph.
-This conversion from signal to GRT is the same as what would happen if the subgraph owner deprecated the subgraph in L1. When the subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles subgraph upgrades and auto-migrated signal). Each Curator on that subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the subgraph.
+This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph.
-A fraction of these GRT corresponding to the subgraph owner is sent to L2 together with the subgraph.
+A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph.
-At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be help indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it.
+At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it.
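The proportional claim described above amounts to a simple pro-rata split of the burned signal's GRT. This is illustrative arithmetic only (the function name is hypothetical); the actual amounts come from the curation bonding curve and the GNS contract.

```python
def curator_claim(total_grt: float, curator_shares: float, total_shares: float) -> float:
    """GRT a Curator can withdraw, proportional to their share of the Subgraph's signal."""
    return total_grt * curator_shares / total_shares

# e.g. a Curator holding 250 of 1,000 shares on a Subgraph whose burned signal yielded 10,000 GRT
print(curator_claim(10_000, 250, 1_000))  # 2500.0
```

Because every Curator's claim is fixed at burn time, withdrawing early or late yields the same amount.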
## Choosing your L2 wallet
@@ -130,9 +130,9 @@ If you're using a smart contract wallet, like a multisig (e.g. a Safe), then cho
Before starting the transfer, you must decide which address will own the curation on L2 (see "Choosing your L2 wallet" above), and it is recommended having some ETH for gas already bridged on Arbitrum in case you need to retry the execution of the message on L2. You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - since gas fees on Arbitrum are so low, you should only need a small amount, e.g. 0.01 ETH will probably be more than enough.
-If a subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred subgraph.
+If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph.
-When looking at the subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool.
+When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool.

@@ -162,4 +162,4 @@ In most cases, this step will auto-execute as the L2 gas included in step 1 shou
## Withdrawing your curation on L1
-If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address.
+If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address.
diff --git a/website/src/pages/ar/archived/sunrise.mdx b/website/src/pages/ar/archived/sunrise.mdx
index eb18a93c506c..71262f22e7d8 100644
--- a/website/src/pages/ar/archived/sunrise.mdx
+++ b/website/src/pages/ar/archived/sunrise.mdx
@@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ
## What was the Sunrise of Decentralized Data?
-The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly.
+The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly.
-This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs.
+This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs.
### What happened to the hosted service?
-The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service.
+The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service.
-During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs.
+During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs.
### Was Subgraph Studio impacted by this upgrade?
No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service.
-### Why were subgraphs published to Arbitrum, did it start indexing a different network?
+### Why were Subgraphs published to Arbitrum, did it start indexing a different network?
-The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/)
+The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/).
## About the Upgrade Indexer
> The upgrade Indexer is currently active.
-The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed.
+The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed.
### What does the upgrade Indexer do?
-- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published.
+- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published.
- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/).
-- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them.
+- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them.
### Why is Edge & Node running the upgrade Indexer?
-Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs.
+Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs.
### What does the upgrade indexer mean for existing Indexers?
Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first.
-However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain.
+However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain.
-The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network.
+The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network.
### What does this mean for Delegators?
-The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity.
+The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity.
### Did the upgrade Indexer compete with existing Indexers for rewards?
-No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards.
+No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards.
-It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs.
+It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs.
-### How does this affect subgraph developers?
+### How does this affect Subgraph developers?
-Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade.
+Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade.
### How does the upgrade Indexer benefit data consumers?
@@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp
The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market.
-### When will the upgrade Indexer stop supporting a subgraph?
+### When will the upgrade Indexer stop supporting a Subgraph?
-The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it.
+The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it.
-Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days.
+Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days.
-Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it.
+Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it.
diff --git a/website/src/pages/ar/global.json b/website/src/pages/ar/global.json
index b543fd624f0e..d9110259f5cb 100644
--- a/website/src/pages/ar/global.json
+++ b/website/src/pages/ar/global.json
@@ -6,6 +6,7 @@
"subgraphs": "Subgraphs",
"substreams": "متعدد-السلاسل",
"sps": "Substreams-Powered Subgraphs",
+ "tokenApi": "Token API",
"indexing": "Indexing",
"resources": "Resources",
"archived": "Archived"
@@ -24,9 +25,51 @@
"linkToThisSection": "Link to this section"
},
"content": {
- "note": "Note",
+ "callout": {
+ "note": "Note",
+ "tip": "Tip",
+ "important": "Important",
+ "warning": "Warning",
+ "caution": "Caution"
+ },
"video": "Video"
},
+ "openApi": {
+ "parameters": {
+ "pathParameters": "Path Parameters",
+ "queryParameters": "Query Parameters",
+ "headerParameters": "Header Parameters",
+ "cookieParameters": "Cookie Parameters",
+ "parameter": "Parameter",
+ "description": "الوصف",
+ "value": "Value",
+ "required": "Required",
+ "deprecated": "Deprecated",
+ "defaultValue": "Default value",
+ "minimumValue": "Minimum value",
+ "maximumValue": "Maximum value",
+ "acceptedValues": "Accepted values",
+ "acceptedPattern": "Accepted pattern",
+ "format": "Format",
+ "serializationFormat": "Serialization format"
+ },
+ "request": {
+ "label": "Test this endpoint",
+ "noCredentialsRequired": "No credentials required",
+ "send": "Send Request"
+ },
+ "responses": {
+ "potentialResponses": "Potential Responses",
+ "status": "Status",
+ "description": "الوصف",
+ "liveResponse": "Live Response",
+ "example": "Example"
+ },
+ "errors": {
+ "invalidApi": "Could not retrieve API {0}.",
+ "invalidOperation": "Could not retrieve operation {0} in API {1}."
+ }
+ },
"notFound": {
"title": "Oops! This page was lost in space...",
"subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.",
diff --git a/website/src/pages/ar/index.json b/website/src/pages/ar/index.json
index c53846a9d8fa..2443372843a8 100644
--- a/website/src/pages/ar/index.json
+++ b/website/src/pages/ar/index.json
@@ -7,7 +7,7 @@
"cta2": "Build your first subgraph"
},
"products": {
- "title": "The Graph’s Products",
+ "title": "The Graph's Products",
"description": "Choose a solution that fits your needs—interact with blockchain data your way.",
"subgraphs": {
"title": "Subgraphs",
@@ -21,7 +21,7 @@
},
"sps": {
"title": "Substreams-Powered Subgraphs",
- "description": "Boost your subgraph’s efficiency and scalability by using Substreams.",
+ "description": "Boost your subgraph's efficiency and scalability by using Substreams.",
"cta": "Set up a Substreams-powered subgraph"
},
"graphNode": {
@@ -39,12 +39,12 @@
"title": "الشبكات المدعومة",
"details": "Network Details",
"services": "Services",
- "type": "Type",
+ "type": "النوع",
"protocol": "Protocol",
"identifier": "Identifier",
"chainId": "Chain ID",
"nativeCurrency": "Native Currency",
- "docs": "Docs",
+ "docs": "التوثيق",
"shortName": "Short Name",
"guides": "Guides",
"search": "Search networks",
@@ -68,7 +68,7 @@
"name": "Name",
"id": "ID",
"subgraphs": "Subgraphs",
- "substreams": "Substreams",
+ "substreams": "متعدد-السلاسل",
"firehose": "Firehose",
"tokenapi": "Token API"
}
@@ -80,7 +80,7 @@
"description": "Kickstart your journey into subgraph development."
},
"substreams": {
- "title": "Substreams",
+ "title": "متعدد-السلاسل",
"description": "Stream high-speed data for real-time indexing."
},
"timeseries": {
@@ -92,7 +92,7 @@
"description": "Leverage features like custom data sources, event handlers, and topic filters."
},
"billing": {
- "title": "Billing",
+ "title": "الفوترة",
"description": "Optimize costs and manage billing efficiently."
}
},
@@ -156,15 +156,15 @@
"watchOnYouTube": "Watch on YouTube",
"theGraphExplained": {
"title": "The Graph Explained In 1 Minute",
- "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video."
+ "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video."
},
"whatIsDelegating": {
"title": "What is Delegating?",
- "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating."
+ "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph."
},
"howToIndexSolana": {
"title": "How to Index Solana with a Substreams-powered Subgraph",
- "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph."
+ "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases."
}
},
"time": {
diff --git a/website/src/pages/ar/indexing/chain-integration-overview.mdx b/website/src/pages/ar/indexing/chain-integration-overview.mdx
index e6b95ec0fc17..af9a582b58d3 100644
--- a/website/src/pages/ar/indexing/chain-integration-overview.mdx
+++ b/website/src/pages/ar/indexing/chain-integration-overview.mdx
@@ -36,7 +36,7 @@ Ready to shape the future of The Graph Network? [Start your proposal](https://gi
### 2. What happens if Firehose & Substreams support comes after a network is supported on mainnet?
-هذا سيؤثر فقط على دعم البروتوكول لمكافآت الفهرسة على الغرافات الفرعية المدعومة من سبستريمز. تنفيذ الفايرهوز الجديد سيحتاج إلى الفحص على شبكة الاختبار، وفقًا للمنهجية الموضحة للمرحلة الثانية في هذا المقترح لتحسين الغراف. وعلى نحو مماثل، وعلى افتراض أن التنفيذ فعال وموثوق به، سيتتطالب إنشاء طلب سحب على [مصفوفة دعم الميزات] (https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) ("مصادر بيانات سبستريمز" ميزة للغراف الفرعي)، بالإضافة إلى مقترح جديد لتحسين الغراف، لدعم البروتوكول لمكافآت الفهرسة. يمكن لأي شخص إنشاء طلب السحب ومقترح تحسين الغراف؛ وسوف تساعد المؤسسة في الحصول على موافقة المجلس.
+This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval.
### 3. How much time will the process of reaching full protocol support take?
diff --git a/website/src/pages/ar/indexing/new-chain-integration.mdx b/website/src/pages/ar/indexing/new-chain-integration.mdx
index bff012725d9d..b204d002b25d 100644
--- a/website/src/pages/ar/indexing/new-chain-integration.mdx
+++ b/website/src/pages/ar/indexing/new-chain-integration.mdx
@@ -2,7 +2,7 @@
title: New Chain Integration
---
-Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies:
+Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies:
1. **EVM JSON-RPC**
2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms.
@@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through
## EVM considerations - Difference between JSON-RPC & Firehose
-While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing.
+While JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose, as a drop-in replacement for the JSON-RPC extraction layer of `graph-node`, reduces the number of RPC calls required for general indexing by 90%.
-- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes.
+- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes.
-> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers)
+> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible via the `eth_call` RPC method. (It's worth noting that `eth_calls` are not good practice for developers.)
## تكوين عقدة الغراف
-Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph.
+Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph.
1. [استنسخ عقدة الغراف](https://github.com/graphprotocol/graph-node)
@@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your
## Substreams-powered Subgraphs
-For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself.
+For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself.
diff --git a/website/src/pages/ar/indexing/overview.mdx b/website/src/pages/ar/indexing/overview.mdx
index 3bfd1cc210c3..200a3a6a64e5 100644
--- a/website/src/pages/ar/indexing/overview.mdx
+++ b/website/src/pages/ar/indexing/overview.mdx
@@ -7,7 +7,7 @@ Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) i
GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers also earn rewards for delegated stake from Delegators, to contribute to the network.
-يختار المفهرسون subgraphs للقيام بالفهرسة بناء على إشارة تنسيق subgraphs ، حيث أن المنسقون يقومون ب staking ل GRT وذلك للإشارة ل Subgraphs عالية الجودة. يمكن أيضا للعملاء (مثل التطبيقات) تعيين بارامترات حيث يقوم المفهرسون بمعالجة الاستعلامات ل Subgraphs وتسعير رسوم الاستعلام.
+Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (e.g. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing.
## FAQ
@@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT.
**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity.
-**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network.
+**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network.
### How are indexing rewards distributed?
-Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.**
+Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.**
Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack.
### What is a proof of indexing (POI)?
-POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block.
+POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block.
### When are indexing rewards distributed?
@@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap
Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps:
-1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations:
+1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations:
```graphql
query indexerAllocations {
@@ -91,24 +91,24 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that
- **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%.
-### How do Indexers know which subgraphs to index?
+### How do Indexers know which Subgraphs to index?
-Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network:
+Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network:
-- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up.
+- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up.
-- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand.
+- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand.
-- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply.
+- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply.
-- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards.
+- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards.
### What are the hardware requirements?
-- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded.
+- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded.
- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests.
-- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second.
-- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic.
+- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second.
+- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic.
| Setup | Postgres<br />(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs<br />(memory in GBs) |
| --- | :-: | :-: | :-: | :-: | :-: |
@@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making
## Infrastructure
-At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.
+At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.
-- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
+- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
-- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
+- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
-- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway.
-- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations.
+- **Indexer agent** - Facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations.
- **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server.
@@ -149,8 +149,8 @@ Note: To support agile scaling, it is recommended that query and indexing concer
| Port | Purpose | Routes | CLI Argument | Environment Variable |
| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server<br />(for subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS<br />(for subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - |
+| 8000 | GraphQL HTTP server<br />(for Subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS<br />(for Subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - |
| 8020 | JSON-RPC<br />(for managing deployments) | / | \--admin-port | - |
| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
@@ -159,7 +159,7 @@ Note: To support agile scaling, it is recommended that query and indexing concer
| Port | Purpose | Routes | CLI Argument | Environment Variable |
| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP server<br />(for paid subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
+| 7600 | GraphQL HTTP server<br />(for paid Subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
#### Indexer Agent
@@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`.
### Graph Node
-[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint.
+[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint.
#### Getting started from source
@@ -365,9 +365,9 @@ docker-compose up
To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components:
-- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each.
+- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure, and manages which Subgraph deployments are indexed and allocated onchain, and how much is allocated to each.
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways.
+- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, and shares important decision-making information with clients such as the gateways.
- **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules.
@@ -525,7 +525,7 @@ graph indexer status
#### Indexer management using Indexer CLI
-The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution.
+The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on behalf of the Indexer. The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution.
#### Usage
@@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar
- `graph indexer rules set [options] ...` - Set one or more indexing rules.
-- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed.
+- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to `always`, then all available Subgraphs on the network will be indexed.
- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index.
@@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported
#### Indexing rules
-Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing.
+Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing.
-For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`.
+For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`.
Data model:
@@ -679,7 +679,7 @@ graph indexer actions execute approve
Note that supported action types for allocation management have different input requirements:
-- `Allocate` - allocate stake to a specific subgraph deployment
+- `Allocate` - allocate stake to a specific Subgraph deployment
- required action params:
- deploymentID
@@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input
- poi
- force (forces using the provided POI even if it doesn’t match what the graph-node provides)
-- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment
+- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment
- required action params:
- allocationID
@@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input
#### Cost models
-Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers.
+Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers.
#### Agora
@@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi
6. Call `stake()` to stake GRT in the protocol.
-7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address.
+7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that perform day-to-day actions such as allocating on Subgraphs and serving (paid) queries. To set the operator, call `setOperator()` with the operator address.
-8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks.
+8. (Optional) To control the distribution of rewards and strategically attract Delegators, Indexers can update their delegation parameters: `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and sets the `cooldownBlocks` period to 500 blocks.
```
setDelegationParameters(950000, 600000, 500)
@@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st
After being created by an Indexer a healthy allocation goes through two states.
-- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules.
+- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules.
- **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)).
-Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically.
+Indexers are encouraged to use offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or that have some chance of failing non-deterministically.
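The delegation-parameter step above passes values in parts per million, e.g. `setDelegationParameters(950000, 600000, 500)` for a 95% query fee cut and 60% indexing reward cut. A minimal sketch of that percentage-to-PPM conversion (the helper name `toPpm` is illustrative, not part of the indexer stack):

```typescript
// Hedged sketch: converts percentage cuts into the parts-per-million
// values expected by setDelegationParameters(). Helper names are
// illustrative assumptions, not part of the indexer tooling.
const PPM = 1_000_000;

function toPpm(percent: number): number {
  if (percent < 0 || percent > 100) {
    throw new Error(`cut must be between 0 and 100, got ${percent}`);
  }
  return Math.round((percent / 100) * PPM);
}

// 95% of query rebates and 60% of indexing rewards go to the Indexer;
// Delegators receive the remaining 5% and 40% respectively.
const queryFeeCut = toPpm(95);       // 950000
const indexingRewardCut = toPpm(60); // 600000
const cooldownBlocks = 500;          // measured in blocks, not a percentage

console.log(queryFeeCut, indexingRewardCut, cooldownBlocks);
```

Keeping the percentages in one place and converting at the call site makes it harder to accidentally pass a raw percentage where the contract expects PPM.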
diff --git a/website/src/pages/ar/indexing/supported-network-requirements.mdx b/website/src/pages/ar/indexing/supported-network-requirements.mdx
index 9c820d055399..4205fe314802 100644
--- a/website/src/pages/ar/indexing/supported-network-requirements.mdx
+++ b/website/src/pages/ar/indexing/supported-network-requirements.mdx
@@ -6,7 +6,7 @@ title: Supported Network Requirements
| --- | --- | --- | :-: |
| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)<br />[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU<br />Ubuntu 22.04<br />16GB+ RAM<br />>= 8 TiB NVMe SSD<br />_last updated August 2023_ | ✅ |
| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU<br />Ubuntu 22.04<br />16GB+ RAM<br />>= 5 TiB NVMe SSD<br />_last updated August 2023_ | ✅ |
-| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)<br />[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)<br />[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU<br />Debian 12/Ubuntu 22.04<br />16 GB RAM<br />>= 4.5TB (NVME preffered)<br />_last updated 14th May 2024_ | ✅ |
+| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)<br />[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)<br />[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU<br />Debian 12/Ubuntu 22.04<br />16 GB RAM<br />>= 4.5TB (NVME preferred)<br />_last updated 14th May 2024_ | ✅ |
| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU<br />Ubuntu 22.04<br />>=32 GB RAM<br />>= 14 TiB NVMe SSD<br />_last updated 22nd June 2024_ | ✅ |
| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU<br />Ubuntu 22.04<br />16GB+ RAM<br />>= 2 TiB NVMe SSD<br />_last updated August 2023_ | ✅ |
| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count<br />Ubuntu 22.04<br />16GB+ RAM<br />>=3TB (NVMe recommended)<br />_last updated August 2023_ | ✅ |
diff --git a/website/src/pages/ar/indexing/tap.mdx b/website/src/pages/ar/indexing/tap.mdx
index ee96a02cd5b8..e7085e5680bb 100644
--- a/website/src/pages/ar/indexing/tap.mdx
+++ b/website/src/pages/ar/indexing/tap.mdx
@@ -1,21 +1,21 @@
---
-title: TAP Migration Guide
+title: GraphTally Guide
---
-Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust.
+Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust.
## نظره عامة
-[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features:
+GraphTally is a drop-in replacement for the Scalar payment system currently in place. It provides the following key features:
- Efficiently handles micropayments.
- Adds a layer of consolidations to onchain transactions and costs.
- Allows Indexers control of receipts and payments, guaranteeing payment for queries.
- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders.
-## Specifics
+### Specifics
-TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process.
+GraphTally allows a sender to make multiple payments to a receiver as **Receipts**, which are then aggregated into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process.
For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value.
@@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed
| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` |
| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` |
-### Requirements
+### Prerequisites
-In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`.
+In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query it, or index it yourself on your `graph-node`.
-- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD)
-- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1)
+- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD)
+- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1)
-> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually.
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually.
## Migration Guide
@@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc
1. **Indexer Agent**
- Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components).
- - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs.
+ - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs.
2. **Indexer Service**
@@ -128,18 +128,18 @@ query_url = ""
status_url = ""
[subgraphs.network]
-# Query URL for the Graph Network subgraph.
+# Query URL for the Graph Network Subgraph.
query_url = ""
# Optional, deployment to look for in the local `graph-node`, if locally indexed.
-# Locally indexing the subgraph is recommended.
+# Locally indexing the Subgraph is recommended.
# NOTE: Use `query_url` or `deployment_id` only
deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
[subgraphs.escrow]
-# Query URL for the Escrow subgraph.
+# Query URL for the Escrow Subgraph.
query_url = ""
# Optional, deployment to look for in the local `graph-node`, if locally indexed.
-# Locally indexing the subgraph is recommended.
+# Locally indexing the Subgraph is recommended.
# NOTE: Use `query_url` or `deployment_id` only
deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
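A filled-in version of the `[subgraphs.*]` sections above might look like the following sketch (the query URL is a placeholder, not a real endpoint; remember to use either `query_url` or `deployment_id` for a given Subgraph, not both):

```toml
[subgraphs.network]
# Query the Graph Network Subgraph via a gateway (placeholder URL).
query_url = "https://gateway.example.com/network-subgraph"

[subgraphs.escrow]
# Or point at a locally indexed deployment instead of a query URL.
deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
```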
diff --git a/website/src/pages/ar/indexing/tooling/graph-node.mdx b/website/src/pages/ar/indexing/tooling/graph-node.mdx
index 0250f14a3d08..edde8a157fd3 100644
--- a/website/src/pages/ar/indexing/tooling/graph-node.mdx
+++ b/website/src/pages/ar/indexing/tooling/graph-node.mdx
@@ -2,31 +2,31 @@
title: Graph Node
---
-Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer.
+Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer.
This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node).
## Graph Node
-[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query.
+[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query.
Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node).
### PostgreSQL database
-The main store for the Graph Node, this is where subgraph data is stored, as well as metadata about subgraphs, and subgraph-agnostic network data such as the block cache, and eth_call cache.
+The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache.
### Network clients
In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client or it could be a more complex setup that load balances across multiple.
-While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)).
+While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically, Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)).
**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/).
### IPFS Nodes
-Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
### Prometheus metrics server
@@ -79,8 +79,8 @@ When it is running Graph Node exposes the following ports:
| Port | Purpose | Routes | CLI Argument | Environment Variable |
| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - |
+| 8000 | GraphQL HTTP server
(for Subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS
(for Subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - |
| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - |
| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
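The indexing status API on port 8030 is a plain GraphQL endpoint, so it can be queried like any other. A minimal sketch (the endpoint URL is the local default, and the fields shown are a small subset of the status schema):

```python
import json
from urllib import request

STATUS_ENDPOINT = "http://localhost:8030/graphql"  # default --index-node-port

# `indexingStatuses` comes from the index-node status schema; the fields
# selected here are a subset of what it exposes.
STATUS_QUERY = """
{
  indexingStatuses {
    subgraph
    health
    synced
    chains { latestBlock { number } }
  }
}
"""

def status_payload(query: str = STATUS_QUERY) -> bytes:
    """Build the JSON body for a GraphQL POST to the status endpoint."""
    return json.dumps({"query": query}).encode()

def fetch_statuses() -> dict:
    """POST the query to a running graph-node and return the decoded reply."""
    req = request.Request(
        STATUS_ENDPOINT,
        data=status_payload(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:  # requires a running graph-node
        return json.load(resp)
```

Calling `fetch_statuses()` against a live node returns the indexing health and latest block per chain for each deployed Subgraph.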
@@ -89,7 +89,7 @@ When it is running Graph Node exposes the following ports:
## Advanced Graph Node configuration
-At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the subgraphs to be indexed.
+At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed.
This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables.
@@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https:
#### Multiple Graph Nodes
-Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules).
+Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules).
> Note that multiple Graph Nodes can all be configured to use the same database, which itself can be horizontally scaled via sharding.
#### Deployment rules
-Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision.
+Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision.
Example deployment rule configuration:
@@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ]
match = { network = [ "xdai", "poa-core" ] }
indexers = [ "index_node_other_0" ]
[[deployment.rule]]
-# There's no 'match', so any subgraph matches
+# There's no 'match', so any Subgraph matches
shards = [ "sharda", "shardb" ]
indexers = [
"index_node_community_0",
@@ -167,11 +167,11 @@ Any node whose --node-id matches the regular expression will be set up to only r
For most use cases, a single Postgres database is sufficient to support a graph-node instance. When a graph-node instance outgrows a single Postgres database, it is possible to split the storage of graph-node's data across multiple Postgres databases. All databases together form the store of the graph-node instance. Each individual database is called a shard.
-Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed.
+Shards can be used to split Subgraph deployments across multiple databases, and replicas can be used to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed.
Sharding becomes useful when your existing database can't keep up with the load that Graph Node puts on it, and when it's not possible to increase the database size anymore.
-> It is generally better make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between subgraphs; in those situations it can help dramatically if the high-volume subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume subgraphs.
+> It is generally better to make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs.
In terms of configuring connections, start with max_connections in postgresql.conf set to 400 (or maybe even 200) and look at the store_connection_wait_time_ms and store_connection_checkout_count Prometheus metrics. Noticeable wait times (anything above 5ms) is an indication that there are too few connections available; high wait times there will also be caused by the database being very busy (like high CPU load). However if the database seems otherwise stable, high wait times indicate a need to increase the number of connections. In the configuration, how many connections each graph-node instance can use is an upper limit, and Graph Node will not keep connections open if it doesn't need them.
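A sketch of a two-shard store configuration with a read replica, following the shape documented for `config.toml` (the connection strings, shard name, and pool sizes below are illustrative, not recommendations):

```toml
[store]
[store.primary]
connection = "postgresql://graph:password@primary-db/graph"
pool_size = 200

[store.primary.replicas.repl1]
# Replica used to spread query load; weight controls its share of queries.
connection = "postgresql://graph:password@primary-replica/graph"
weight = 1

[store.vip]
# Separate shard holding high-volume Subgraph deployments.
connection = "postgresql://graph:password@vip-db/graph"
pool_size = 50
```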
@@ -188,7 +188,7 @@ ingestor = "block_ingestor_node"
#### Supporting multiple networks
-The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of:
+The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of:
- Multiple networks
- Multiple providers per network (this can allow splitting of load across providers, and can also allow for configuration of full nodes as well as archive nodes, with Graph Node preferring cheaper providers if a given workload allows).
@@ -225,11 +225,11 @@ Users who are operating a scaled indexing setup with advanced configuration may
### Managing Graph Node
-Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing subgraphs.
+Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs.
#### Logging
-Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace.
+Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace.
In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs).
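For example, to run with debug-level logs and per-query GraphQL timing (a sketch; the `graph-node` invocation itself depends on your setup and is only hinted at here):

```shell
# Verbose logging for graph-node and its runtime components.
export GRAPH_LOG=debug

# Emit detailed GraphQL query timing (generates a large log volume).
export GRAPH_LOG_QUERY_TIMING=gql

# ...then start graph-node as usual, e.g.:
# graph-node --config config.toml --ipfs https://ipfs.network.thegraph.com
```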
@@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker
Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs`
-### Working with subgraphs
+### Working with Subgraphs
#### Indexing status API
-Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different subgraphs, checking proofs of indexing, inspecting subgraph features and more.
+Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more.
The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql).
@@ -263,7 +263,7 @@ There are three separate parts of the indexing process:
- Processing events in order with the appropriate handlers (this can involve calling the chain for state, and fetching data from the store)
- Writing the resulting data to the store
-These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where subgraphs are slow to index, the underlying cause will depend on the specific subgraph.
+These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph.
Common causes of indexing slowness:
@@ -276,24 +276,24 @@ Common causes of indexing slowness:
- The provider itself falling behind the chain head
- Slowness in fetching new receipts at the chain head from the provider
-Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance.
+Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance.
-#### Failed subgraphs
+#### Failed Subgraphs
-During indexing subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure:
+During indexing, Subgraphs might fail if they encounter unexpected data, if some component is not working as expected, or if there is a bug in the event handlers or configuration. There are two general types of failure:
- Deterministic failures: these are failures which will not be resolved with retries
- Non-deterministic failures: these might be down to issues with the provider, or some unexpected Graph Node error. When a non-deterministic failure occurs, Graph Node will retry the failing handlers, backing off over time.
-In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the subgraph code is required.
+In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the Subgraph code is required.
-> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository.
+> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository.
#### Block and call cache
-Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph.
+Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph.
-However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider.
+However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider.
If a block cache inconsistency is suspected, such as a tx receipt missing event:
@@ -304,7 +304,7 @@ If a block cache inconsistency is suspected, such as a tx receipt missing event:
#### Querying issues and errors
-Once a subgraph has been indexed, indexers can expect to serve queries via the subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process.
+Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process.
However, even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users.
@@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat
##### Analysing queries
-Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that subgraph or query. And then of course to resolve it, if possible.
+Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible.
In other cases, the trigger might be high memory usage on a query node, in which case the challenge is first to identify the query causing the issue.
@@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the
Once a table has been determined to be account-like, running `graphman stats account-like .` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again.
-For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.
+For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.
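The "distinct entities under 1% of rows" rule of thumb can be checked with a quick calculation (a sketch; the function name and threshold parameter are illustrative, and in practice graphman's stats commands report the underlying numbers):

```python
def is_account_like(distinct_entities: int, total_rows: int,
                    threshold: float = 0.01) -> bool:
    """Heuristic: a table is 'account-like' when distinct entities are
    under ~1% of total rows, i.e. most rows are historical versions."""
    if total_rows == 0:
        return False
    return distinct_entities / total_rows < threshold

# A `token` table with 5,000 tokens but 2,000,000 rows of versions:
assert is_account_like(5_000, 2_000_000)      # 0.25% -> candidate
assert not is_account_like(50_000, 100_000)   # 50%   -> not account-like
```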
-#### Removing subgraphs
+#### Removing Subgraphs
> This is new functionality, which will be available in Graph Node 0.29.x
-At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
+At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
diff --git a/website/src/pages/ar/indexing/tooling/graphcast.mdx b/website/src/pages/ar/indexing/tooling/graphcast.mdx
index 8fc00976ec28..d084edcd7067 100644
--- a/website/src/pages/ar/indexing/tooling/graphcast.mdx
+++ b/website/src/pages/ar/indexing/tooling/graphcast.mdx
@@ -10,10 +10,10 @@ Currently, the cost to broadcast information to other network participants is de
The Graphcast SDK (Software Development Kit) allows developers to build Radios, which are gossip-powered applications that Indexers can run to serve a given purpose. We also intend to create a few Radios (or provide support to other developers/teams that wish to build Radios) for the following use cases:
-- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)).
-- Conducting auctions and coordination for warp syncing subgraphs, substreams, and Firehose data from other Indexers.
-- Self-reporting on active query analytics, including subgraph request volumes, fee volumes, etc.
-- Self-reporting on indexing analytics, including subgraph indexing time, handler gas costs, indexing errors encountered, etc.
+- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)).
+- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers.
+- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc.
+- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc.
- Self-reporting on stack information including graph-node version, Postgres version, Ethereum client version, etc.
### Learn More
diff --git a/website/src/pages/ar/resources/benefits.mdx b/website/src/pages/ar/resources/benefits.mdx
index 2e1a0834591c..00a32f92a1a3 100644
--- a/website/src/pages/ar/resources/benefits.mdx
+++ b/website/src/pages/ar/resources/benefits.mdx
@@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar
‡Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries.
-Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet.
+Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet.
-Curating signal on a subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a subgraph, and later withdrawn—with potential to earn returns in the process).
+Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process).
## No Setup Costs & Greater Operational Efficiency
@@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy
Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally.
-Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/).
+Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/).
diff --git a/website/src/pages/ar/resources/glossary.mdx b/website/src/pages/ar/resources/glossary.mdx
index f922950390a6..d456a94f63ab 100644
--- a/website/src/pages/ar/resources/glossary.mdx
+++ b/website/src/pages/ar/resources/glossary.mdx
@@ -4,51 +4,51 @@ title: قائمة المصطلحات
- **The Graph**: A decentralized protocol for indexing and querying data.
-- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer.
+- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer.
-- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs.
-- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network.
+- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network.
-- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone.
+- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone.
- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries.
- **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards.
- 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network.
+ 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network.
- 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually.
+ 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually.
- **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit.
- **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake.
-- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers.
+- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers.
-- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs.
+- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs.
- **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned.
-- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph.
+- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph.
-- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned.
+- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned.
-- **Data Consumer**: Any application or user that queries a subgraph.
+- **Data Consumer**: Any application or user that queries a Subgraph.
-- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network.
+- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network.
-- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example.
+- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example.
- **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day.
-- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses:
+- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses:
- 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated.
+ 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated.
- 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data.
+ 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data.
-- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs.
+- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs.
- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide.
@@ -56,28 +56,28 @@ title: قائمة المصطلحات
- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned.
-- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT.
+- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT.
- **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT.
- **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network.
-- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer.
+- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer.
-- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer.
+- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer.
-- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations.
+- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations.
- **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way.
-- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol.
+- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol.
- **Graph CLI**: A command line interface tool for building and deploying to The Graph.
- **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again.
-- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake.
+- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake.
-- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings.
+- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings.
-- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2).
+- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2).
diff --git a/website/src/pages/ar/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/ar/resources/migration-guides/assemblyscript-migration-guide.mdx
index 9fe263f2f8b2..40086bb24579 100644
--- a/website/src/pages/ar/resources/migration-guides/assemblyscript-migration-guide.mdx
+++ b/website/src/pages/ar/resources/migration-guides/assemblyscript-migration-guide.mdx
@@ -2,13 +2,13 @@
title: دليل ترحيل AssemblyScript
---
-Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉
+Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉
-سيمكن ذلك لمطوري ال Subgraph من استخدام مميزات أحدث للغة AS والمكتبة القياسية.
+That will enable Subgraph developers to use newer features of the AS language and standard library.
This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂
-> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest.
+> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest.
## مميزات
@@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `
## كيف تقوم بالترقية؟
-1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`:
+1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`:
```yaml
...
@@ -52,7 +52,7 @@ dataSources:
...
mapping:
...
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
...
```
@@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null
maybeValue.aMethod()
```
-إذا لم تكن متأكدا من اختيارك ، فنحن نوصي دائما باستخدام الإصدار الآمن. إذا كانت القيمة غير موجودة ، فقد ترغب في القيام بعبارة if المبكرة مع قيمة راجعة في معالج الـ subgraph الخاص بك.
+If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to do an early if statement with a return in your Subgraph handler.
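The early-return pattern recommended above can be sketched in plain TypeScript (not graph-ts — the `Entity`, `load`, and `handleEvent` names here are hypothetical, for illustration only):

```typescript
// Minimal sketch of the safe-load + early-return pattern.
// `load` may return null, mirroring a nullable entity load in a Subgraph handler.
type Entity = { id: string; count: number };

const store = new Map<string, Entity>();

function load(id: string): Entity | null {
  return store.get(id) ?? null; // safe version: returns Entity | null
}

function handleEvent(id: string): string {
  const entity = load(id);
  if (entity == null) {
    // early return instead of `load(id)!`, which would break at runtime
    return "skipped";
  }
  entity.count += 1;
  return "updated";
}

store.set("a", { id: "a", count: 0 });
console.log(handleEvent("a")); // "updated"
console.log(handleEvent("missing")); // "skipped"
```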
### Variable Shadowing
@@ -132,7 +132,7 @@ in assembly/index.ts(4,3)
### مقارانات Null
-من خلال إجراء الترقية على ال Subgraph الخاص بك ، قد تحصل أحيانًا على أخطاء مثل هذه:
+By doing the upgrade on your Subgraph, sometimes you might get errors like these:
```typescript
ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'.
@@ -330,7 +330,7 @@ let wrapper = new Wrapper(y)
wrapper.n = wrapper.n + x // doesn't give compile time errors as it should
```
-لقد فتحنا مشكلة في مترجم AssemblyScript ، ولكن في الوقت الحالي إذا أجريت هذا النوع من العمليات في Subgraph mappings ، فيجب عليك تغييرها لإجراء فحص ل null قبل ذلك.
+We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check first.
```typescript
let wrapper = new Wrapper(y)
@@ -352,7 +352,7 @@ value.x = 10
value.y = 'content'
```
-سيتم تجميعها لكنها ستتوقف في وقت التشغيل ، وهذا يحدث لأن القيمة لم تتم تهيئتها ، لذا تأكد من أن ال subgraph قد قام بتهيئة قيمها ، على النحو التالي:
+It will compile but break at runtime because the value hasn't been initialized, so make sure your Subgraph initializes its values, like this:
```typescript
var value = new Type() // initialized
diff --git a/website/src/pages/ar/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/ar/resources/migration-guides/graphql-validations-migration-guide.mdx
index 29fed533ef8c..ebed96df1002 100644
--- a/website/src/pages/ar/resources/migration-guides/graphql-validations-migration-guide.mdx
+++ b/website/src/pages/ar/resources/migration-guides/graphql-validations-migration-guide.mdx
@@ -20,7 +20,7 @@ To be compliant with those validations, please follow the migration guide.
You can use the CLI migration tool to find any issues in your GraphQL operations and fix them. Alternatively you can update the endpoint of your GraphQL client to use the `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` endpoint. Testing your queries against this endpoint will help you find the issues in your queries.
-> Not all subgraphs will need to be migrated, if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.
+> Not all Subgraphs will need to be migrated; if you are using [GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.
## Migration CLI tool
diff --git a/website/src/pages/ar/resources/roles/curating.mdx b/website/src/pages/ar/resources/roles/curating.mdx
index d2f355055aac..e73785e92590 100644
--- a/website/src/pages/ar/resources/roles/curating.mdx
+++ b/website/src/pages/ar/resources/roles/curating.mdx
@@ -2,37 +2,37 @@
title: Curating
---
-Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index.
+Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for Indexers when determining which Subgraphs to index.
## What Does Signaling Mean for The Graph Network?
-Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed.
+Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed.
-Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives.
+Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives.
-Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them.
+Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them.
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
+The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
-When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
+When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
-If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with.
+If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with.
-Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below).
+Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below).
-
+
## كيفية الإشارة
-Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/)
+Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/)
-يمكن للمنسق الإشارة إلى إصدار معين ل subgraph ، أو يمكنه اختيار أن يتم ترحيل migrate إشاراتهم تلقائيا إلى أحدث إصدار لهذا ال subgraph. كلاهما استراتيجيات سليمة ولها إيجابيات وسلبيات.
+A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons.
-Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred.
+Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred.
Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay a 0.5% curation tax on all auto-migrated curation shares.
-> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy.
+> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than subsequent curators, because the first curator initializes the curation share tokens and also transfers tokens into The Graph proxy.
## Withdrawing your GRT
@@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time.
Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax).
-Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled.
+Once a curator withdraws their signal, Indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled.
-However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph.
+However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph.
## المخاطر
1. سوق الاستعلام يعتبر حديثا في The Graph وهناك خطر من أن يكون٪ APY الخاص بك أقل مما تتوقع بسبب ديناميكيات السوق الناشئة.
-2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned.
-3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/).
-4. يمكن أن يفشل ال subgraph بسبب خطأ. ال subgraph الفاشل لا يمكنه إنشاء رسوم استعلام. نتيجة لذلك ، سيتعين عليك الانتظار حتى يصلح المطور الخطأ وينشر إصدارا جديدا.
- - إذا كنت مشتركا في أحدث إصدار من subgraph ، فسيتم ترحيل migrate أسهمك تلقائيا إلى هذا الإصدار الجديد. هذا سيتحمل ضريبة تنسيق بنسبة 0.5٪.
- - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax.
+2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned.
+3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/).
+4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version.
+ - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax.
+ - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax.
## الأسئلة الشائعة حول التنسيق
### 1. ما هي النسبة المئوية لرسوم الاستعلام التي يكسبها المنسقون؟
-By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance.
+By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance.
-### 2. كيف يمكنني تقرير ما إذا كان ال subgraph عالي الجودة لكي أقوم بالإشارة إليه؟
+### 2. How do I decide which Subgraphs are high quality to signal on?
-Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result:
+Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result:
-- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future
-- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on.
+- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future
+- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on.
-### 3. What’s the cost of updating a subgraph?
+### 3. What’s the cost of updating a Subgraph?
-Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas.
+Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas.
-### 4. How often can I update my subgraph?
+### 4. How often can I update my Subgraph?
-It’s suggested that you don’t update your subgraphs too frequently. See the question above for more details.
+It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details.
### 5. هل يمكنني بيع أسهم التنسيق الخاصة بي؟
diff --git a/website/src/pages/ar/resources/subgraph-studio-faq.mdx b/website/src/pages/ar/resources/subgraph-studio-faq.mdx
index 74c0228e4093..ec613ed68df2 100644
--- a/website/src/pages/ar/resources/subgraph-studio-faq.mdx
+++ b/website/src/pages/ar/resources/subgraph-studio-faq.mdx
@@ -4,7 +4,7 @@ title: الأسئلة الشائعة حول الفرعيةرسم بياني اس
## 1. What is Subgraph Studio?
-[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys.
+[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys.
## 2. How do I create an API Key?
@@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th
After creating an API Key, in the Security section, you can define the domains that can query a specific API Key.
-## 5. Can I transfer my subgraph to another owner?
+## 5. Can I transfer my Subgraph to another owner?
-Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'.
+Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'.
-Note that you will no longer be able to see or edit the subgraph in Studio once it has been transferred.
+Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred.
-## 6. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use?
+## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use?
-You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio.
+You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio.
-تذكر أنه يمكنك إنشاء API key والاستعلام عن أي subgraph منشور على الشبكة ، حتى إذا قمت ببناء subgraph بنفسك. حيث أن الاستعلامات عبر API key الجديد ، هي استعلامات مدفوعة مثل أي استعلامات أخرى على الشبكة.
+Remember that you can create an API key and query any Subgraph published to the network, even if you built the Subgraph yourself. Queries made via the new API key are paid queries, like any other on the network.
diff --git a/website/src/pages/ar/resources/tokenomics.mdx b/website/src/pages/ar/resources/tokenomics.mdx
index 511af057534f..fa0f098b22c8 100644
--- a/website/src/pages/ar/resources/tokenomics.mdx
+++ b/website/src/pages/ar/resources/tokenomics.mdx
@@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s
## نظره عامة
-The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph.
+The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph.
## Specifics
@@ -24,9 +24,9 @@ There are four primary network participants:
1. Delegators - Delegate GRT to Indexers & secure the network
-2. المنسقون (Curators) - يبحثون عن أفضل subgraphs للمفهرسين
+2. Curators - Find the best Subgraphs for Indexers
-3. Developers - Build & query subgraphs
+3. Developers - Build & query Subgraphs
4. المفهرسون (Indexers) - العمود الفقري لبيانات blockchain
@@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth
## Delegators (Passively earn GRT)
-Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually.
+Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer 9-12% annually.
For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually.
@@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head
## Curators (Earn GRT)
-Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed.
+Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed.
-Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation.
+Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation.
-Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT.
+Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT.
## Developers
-Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants.
+Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants.
-### إنشاء subgraph
+### Creating a Subgraph
-Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers.
+Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers.
-Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network.
+Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network.
-### الاستعلام عن subgraph موجود
+### Querying an existing Subgraph
-Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph.
+Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph.
Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol.
@@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th
## Indexers (Earn GRT)
-Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs.
+Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs.
Indexers can earn GRT rewards in two ways:
-1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)).
+1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)).
-2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately.
+2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately.
-Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph.
+Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph.
In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve.
-Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network.
+Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network.
The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors.
## Token Supply: Burning & Issuance
-The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network.
+The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network.
-The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data.
+The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data.

diff --git a/website/src/pages/ar/sps/introduction.mdx b/website/src/pages/ar/sps/introduction.mdx
index 2336653c0e06..e74abf2f0998 100644
--- a/website/src/pages/ar/sps/introduction.mdx
+++ b/website/src/pages/ar/sps/introduction.mdx
@@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs
sidebarTitle: مقدمة
---
-Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
## نظره عامة
-Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
+Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
### Specifics
There are two methods of enabling this technology:
-1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph.
+1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph.
-2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities.
+2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities.
-You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node.
+You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node.
### مصادر إضافية
@@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil
- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
diff --git a/website/src/pages/ar/sps/sps-faq.mdx b/website/src/pages/ar/sps/sps-faq.mdx
index 88f4ddbb66d7..c19b0a950297 100644
--- a/website/src/pages/ar/sps/sps-faq.mdx
+++ b/website/src/pages/ar/sps/sps-faq.mdx
@@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi
Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams.
-## ما هي الغرافات الفرعية المدعومة بسبستريمز؟
+## What are Substreams-powered Subgraphs?
-[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities.
+[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities.
-If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API.
+If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API.
-## كيف تختلف الغرافات الفرعية التي تعمل بسبستريمز عن الغرافات الفرعية؟
+## How are Substreams-powered Subgraphs different from Subgraphs?
Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain.
-By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times.
+By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelized processing, which can mean much faster processing times.
-## ما هي فوائد استخدام الغرافات الفرعية المدعومة بسبستريمز؟
+## What are the benefits of using Substreams-powered Subgraphs?
-Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
+Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
## ماهي فوائد سبستريمز؟
@@ -35,7 +35,7 @@ There are many benefits to using Substreams, including:
- High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery).
-- التوجيه لأي مكان: يمكنك توجيه بياناتك لأي مكان ترغب فيه: بوستجريسكيو، مونغو دي بي، كافكا، الغرافات الفرعية، الملفات المسطحة، جداول جوجل.
+- Sink anywhere: Sink your data anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets.
- Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks.
@@ -63,17 +63,17 @@ There are many benefits to using Firehose, including:
- يستفيد من الملفات المسطحة: يتم استخراج بيانات سلسلة الكتل إلى ملفات مسطحة، وهي أرخص وأكثر موارد الحوسبة تحسيناً.
-## أين يمكن للمطورين الوصول إلى مزيد من المعلومات حول الغرافات الفرعية المدعومة بسبستريمز و سبستريمز؟
+## Where can developers access more information about Substreams-powered Subgraphs and Substreams?
The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules.
-The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph.
+The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph.
The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code.
## What is the role of Rust modules in Substreams?
-تعتبر وحدات رست مكافئة لمعينات أسمبلي اسكريبت في الغرافات الفرعية. يتم ترجمتها إلى ويب أسيمبلي بنفس الطريقة، ولكن النموذج البرمجي يسمح بالتنفيذ الموازي. تحدد وحدات رست نوع التحويلات والتجميعات التي ترغب في تطبيقها على بيانات سلاسل الكتل الخام.
+Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data.
See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details.
@@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst
When using Substreams, the composition happens at the transformation layer enabling cached modules to be re-used.
-على سبيل المثال، يمكن لأحمد بناء وحدة أسعار اسواق الصرف اللامركزية، ويمكن لإبراهيم استخدامها لبناء مجمِّع حجم للتوكن المهتم بها، ويمكن لآدم دمج أربع وحدات أسعار ديكس فردية لإنشاء مورد أسعار. سيقوم طلب واحد من سبستريمز بتجميع جميع هذه الوحدات الفردية، وربطها معًا لتقديم تدفق بيانات أكثر تطوراً ودقة. يمكن استخدام هذا التدفق لملءغراف فرعي ويمكن الاستعلام عنه من قبل المستخدمين.
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for tokens of interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph and be queried by consumers.
## كيف يمكنك إنشاء ونشر غراف فرعي مدعوم بسبستريمز؟
After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/).
-## أين يمكنني العثور على أمثلة على سبستريمز والغرافات الفرعية المدعومة بسبستريمز؟
+## Where can I find examples of Substreams and Substreams-powered Subgraphs?
-يمكنك زيارة [جيت هب](https://github.com/pinax-network/awesome-substreams) للعثور على أمثلة للسبستريمز والغرافات الفرعية المدعومة بسبستريمز.
+You can visit [this GitHub repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs.
-## ماذا تعني السبستريمز والغرافات الفرعية المدعومة بسبستريمز بالنسبة لشبكة الغراف؟
+## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network?
إن التكامل مع سبستريمز والغرافات الفرعية المدعومة بسبستريمز واعدة بالعديد من الفوائد، بما في ذلك عمليات فهرسة عالية الأداء وقابلية أكبر للتركيبية من خلال استخدام وحدات المجتمع والبناء عليها.
diff --git a/website/src/pages/ar/sps/triggers.mdx b/website/src/pages/ar/sps/triggers.mdx
index 05eccf4d55fb..1bf1a2cf3f51 100644
--- a/website/src/pages/ar/sps/triggers.mdx
+++ b/website/src/pages/ar/sps/triggers.mdx
@@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL.
## نظره عامة
-Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
+Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
-By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework.
+By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework.
### Defining `handleTransactions`
-The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created.
+The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created.
```tsx
export function handleTransactions(bytes: Uint8Array): void {
@@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file:
1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object
2. Looping over the transactions
-3. Create a new subgraph entity for every transaction
+3. Create a new Subgraph entity for every transaction
-To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/).
+To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/).
### مصادر إضافية
diff --git a/website/src/pages/ar/sps/tutorial.mdx b/website/src/pages/ar/sps/tutorial.mdx
index 21f99fff2832..c41b10d885cd 100644
--- a/website/src/pages/ar/sps/tutorial.mdx
+++ b/website/src/pages/ar/sps/tutorial.mdx
@@ -3,7 +3,7 @@ title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana'
sidebarTitle: Tutorial
---
-Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token.
+Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token.
## Get Started
@@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs
### Step 2: Generate the Subgraph Manifest
-Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container:
+Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container:
```bash
substreams codegen subgraph
@@ -73,7 +73,7 @@ dataSources:
moduleName: map_spl_transfers # Module defined in the substreams.yaml
file: ./my-project-sol-v0.1.0.spkg
mapping:
- apiVersion: 0.0.7
+ apiVersion: 0.0.9
kind: substreams/graph-entities
file: ./src/mappings.ts
handler: handleTriggers
@@ -81,7 +81,7 @@ dataSources:
### Step 3: Define Entities in `schema.graphql`
-Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file.
+Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file.
Here is an example:
@@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s
With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory.
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id:
+The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into Subgraph entities:
```ts
import { Protobuf } from 'as-proto/assembly'
@@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command:
npm run protogen
```
-This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler.
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler.
### Conclusion
-Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
+Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
### Video Tutorial
diff --git a/website/src/pages/ar/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/ar/subgraphs/best-practices/avoid-eth-calls.mdx
index e40a7b3712e4..07249c97dd2a 100644
--- a/website/src/pages/ar/subgraphs/best-practices/avoid-eth-calls.mdx
+++ b/website/src/pages/ar/subgraphs/best-practices/avoid-eth-calls.mdx
@@ -1,19 +1,19 @@
---
title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls
-sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls'
+sidebarTitle: Avoiding eth_calls
---
## TLDR
-`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`.
+`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`.
## Why Avoiding `eth_calls` Is a Best Practice
-Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed.
+Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down Subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our Subgraphs, we can significantly improve our indexing speed.
### What Does an eth_call Look Like?
-`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need:
+`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need:
```yaml
event Transfer(address indexed from, address indexed to, uint256 value);
@@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void {
}
```
-This is functional, however is not ideal as it slows down our subgraph’s indexing.
+This is functional; however, it is not ideal as it slows down our Subgraph’s indexing.
## How to Eliminate `eth_calls`
@@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within
event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo);
```
-With this update, the subgraph can directly index the required data without external calls:
+With this update, the Subgraph can directly index the required data without external calls:
```typescript
import { Address } from '@graphprotocol/graph-ts'
@@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c
The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory and the call from the handler will retrieve the result from this in memory cache instead of making an actual RPC call.
-Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0.
+Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0.
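For reference, a call declaration in the manifest has roughly this shape — the contract, handler, and function names below are hypothetical, so check the manifest reference for the exact syntax:

```yaml
eventHandlers:
  - event: Transfer(address indexed from, address indexed to, uint256 value)
    handler: handleTransfer
    calls:
      # label: Contract[address].function(args) — executed ahead of the handler
      ERC20.getPoolInfo: ERC20[event.address].getPoolInfo(event.params.to)
```

graph-node evaluates the declared call before invoking the handler, so the handler's own `eth_call` is served from the in-memory cache.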
## Conclusion
-You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs.
+You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs.
## Subgraph Best Practices 1-6
diff --git a/website/src/pages/ar/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/ar/subgraphs/best-practices/derivedfrom.mdx
index db3a49928c89..093eb29255ab 100644
--- a/website/src/pages/ar/subgraphs/best-practices/derivedfrom.mdx
+++ b/website/src/pages/ar/subgraphs/best-practices/derivedfrom.mdx
@@ -1,11 +1,11 @@
---
title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom
-sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom'
+sidebarTitle: Arrays with @derivedFrom
---
## TLDR
-Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly.
+Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly.
## How to Use the `@derivedFrom` Directive
@@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema.
comments: [Comment!]! @derivedFrom(field: "post")
```
-`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient.
+`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient.
### Example Use Case for `@derivedFrom`
@@ -60,17 +60,17 @@ type Comment @entity {
Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded.
-This will not only make our subgraph more efficient, but it will also unlock three features:
+This will not only make our Subgraph more efficient, but it will also unlock three features:
1. We can query the `Post` and see all of its comments.
2. We can do a reverse lookup and query any `Comment` and see which post it comes from.
-3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings.
+3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings.
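Features 1 and 2 above can be seen in a single query — the plural root fields below assume The Graph's auto-generated query schema for the example `Post`/`Comment` entities:

```graphql
{
  posts {
    id
    comments { id } # forward: a post and all of its comments
  }
  comments {
    id
    post { id } # reverse lookup: which post a comment belongs to
  }
}
```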
## Conclusion
-Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval.
+Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval.
For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/).
diff --git a/website/src/pages/ar/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/ar/subgraphs/best-practices/grafting-hotfix.mdx
index b77a40a5be90..d8de3e7a1fa2 100644
--- a/website/src/pages/ar/subgraphs/best-practices/grafting-hotfix.mdx
+++ b/website/src/pages/ar/subgraphs/best-practices/grafting-hotfix.mdx
@@ -1,26 +1,26 @@
---
title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment
-sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing'
+sidebarTitle: Grafting and Hotfixing
---
## TLDR
-Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones.
+Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones.
### نظره عامة
-This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services.
+This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services.
## Benefits of Grafting for Hotfixes
1. **Rapid Deployment**
- - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing.
- - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted.
+ - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing.
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted.
2. **Data Preservation**
- - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records.
+ - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records.
- **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data.
3. **Efficiency**
@@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati
1. **Initial Deployment Without Grafting**
- - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected.
- - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes.
+ - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected.
+ - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes.
2. **Implementing the Hotfix with Grafting**
- **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event.
- - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix.
- - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph.
- - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible.
+ - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix.
+ - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph.
+ - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible.
3. **Post-Hotfix Actions**
- - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue.
- - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance.
+ - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue.
+ - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance.
> Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance.
- - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph.
+ - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph.
4. **Important Considerations**
- **Careful Block Selection**: Choose the graft block number carefully to prevent data loss.
- **Tip**: Use the block number of the last correctly processed event.
- - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID.
- - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment.
- - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features.
+ - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID.
+ - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment.
+ - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features.
## Example: Deploying a Hotfix with Grafting
-Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix.
+Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix.
1. **Failed Subgraph Manifest (subgraph.yaml)**
```yaml
- specVersion: 1.0.0
+ specVersion: 1.3.0
schema:
file: ./schema.graphql
dataSources:
@@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
startBlock: 5000000
mapping:
kind: ethereum/events
- apiVersion: 0.0.7
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Withdrawal
@@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
2. **New Grafted Subgraph Manifest (subgraph.yaml)**
```yaml
- specVersion: 1.0.0
+ specVersion: 1.3.0
schema:
file: ./schema.graphql
dataSources:
@@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
startBlock: 6000001 # Block after the last indexed block
mapping:
kind: ethereum/events
- apiVersion: 0.0.7
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Withdrawal
@@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
features:
- grafting
graft:
- base: QmBaseDeploymentID # Deployment ID of the failed subgraph
+ base: QmBaseDeploymentID # Deployment ID of the failed Subgraph
block: 6000000 # Last successfully indexed block
```
**Explanation:**
-- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract.
+- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract.
- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error.
- **Grafting Configuration**:
- - **base**: Deployment ID of the failed subgraph.
+ - **base**: Deployment ID of the failed Subgraph.
- **block**: Block number where grafting should begin.
3. **Deployment Steps**
@@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
- **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations.
- **Deploy the Subgraph**:
- Authenticate with the Graph CLI.
- - Deploy the new subgraph using `graph deploy`.
+ - Deploy the new Subgraph using `graph deploy`.
4. **Post-Deployment**
- - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point.
+ - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point.
- **Monitor Data**: Ensure that new data is being captured and the hotfix is effective.
- **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability.
@@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance.
-- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema.
+- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema.
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing.
-- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability.
+- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability.
### Risk Management
@@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec
## Conclusion
-Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to:
+Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to:
- **Quickly Recover** from critical errors without re-indexing.
- **Preserve Historical Data**, maintaining continuity for applications and users.
- **Ensure Service Availability** by minimizing downtime during critical fixes.
-However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability.
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability.
## مصادر إضافية
- **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting
- **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID.
-By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable.
+By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable.
## Subgraph Best Practices 1-6
diff --git a/website/src/pages/ar/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/ar/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx
index 6ff60ec9ab34..3a633244e0f2 100644
--- a/website/src/pages/ar/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx
+++ b/website/src/pages/ar/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx
@@ -1,6 +1,6 @@
---
title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs
-sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs'
+sidebarTitle: Immutable Entities and Bytes as IDs
---
## TLDR
@@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend
### Reasons to Not Use Bytes as IDs
1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used.
-2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used.
+2. If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used.
3. Indexing and querying performance improvements are not desired.
### Concatenating With Bytes as IDs
-It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance.
+It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance.
Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant.
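Conceptually, `concatI32` appends the 32-bit integer's four bytes to an existing byte array, producing one compact `Bytes` value instead of a long hex string. A stand-alone TypeScript sketch (the big-endian byte order is an assumption for illustration; graph-ts defines the actual layout on its `Bytes` type):

```typescript
// Sketch of the idea behind graph-ts's concatI32: append a 4-byte i32
// to a byte array to form a compact Bytes ID. Byte order is assumed
// big-endian here purely for illustration.
function concatI32(bytes: Uint8Array, value: number): Uint8Array {
  const out = new Uint8Array(bytes.length + 4)
  out.set(bytes, 0)
  new DataView(out.buffer).setInt32(bytes.length, value, false) // big-endian suffix
  return out
}

// Example: 4 hash bytes + logIndex 7 → an 8-byte ID instead of a hex string.
const txHash = new Uint8Array([0xde, 0xad, 0xbe, 0xef])
const id = concatI32(txHash, 7)
```

Compared with `toHex() + "-" + toString()`, the result is a fixed-width binary key, which is what makes indexing and lookups cheaper.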
@@ -172,7 +172,7 @@ Query Response:
## Conclusion
-Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds.
+Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds.
Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/).
diff --git a/website/src/pages/ar/subgraphs/best-practices/pruning.mdx b/website/src/pages/ar/subgraphs/best-practices/pruning.mdx
index 1b51dde8894f..2d4f9ad803e0 100644
--- a/website/src/pages/ar/subgraphs/best-practices/pruning.mdx
+++ b/website/src/pages/ar/subgraphs/best-practices/pruning.mdx
@@ -1,11 +1,11 @@
---
title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning
-sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints'
+sidebarTitle: Pruning with indexerHints
---
## TLDR
-[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph.
+[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph.
## How to Prune a Subgraph With `indexerHints`
@@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest.
`indexerHints` has three `prune` options:
-- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0.
+- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0.
- `prune: <Number of Blocks to Retain>`: Sets a custom limit on the number of historical blocks to retain.
- `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. `prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired.
-We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`:
+We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`:
```yaml
-specVersion: 1.0.0
+specVersion: 1.3.0
schema:
file: ./schema.graphql
indexerHints:
@@ -39,7 +39,7 @@ dataSources:
## Conclusion
-Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements.
+Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements.
## Subgraph Best Practices 1-6
diff --git a/website/src/pages/ar/subgraphs/best-practices/timeseries.mdx b/website/src/pages/ar/subgraphs/best-practices/timeseries.mdx
index 74e56c406044..d713d6cd8864 100644
--- a/website/src/pages/ar/subgraphs/best-practices/timeseries.mdx
+++ b/website/src/pages/ar/subgraphs/best-practices/timeseries.mdx
@@ -1,11 +1,11 @@
---
title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations
-sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations'
+sidebarTitle: Timeseries and Aggregations
---
## TLDR
-Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance.
+Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance.
## نظره عامة
@@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri
## How to Implement Timeseries and Aggregations
+### Prerequisites
+
+This feature requires `specVersion` 1.1.0 or higher.
+
### Defining Timeseries Entities
A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements:
@@ -51,7 +55,7 @@ Example:
type Data @entity(timeseries: true) {
id: Int8!
timestamp: Timestamp!
- price: BigDecimal!
+ amount: BigDecimal!
}
```
@@ -68,11 +72,11 @@ Example:
type Stats @aggregation(intervals: ["hour", "day"], source: "Data") {
id: Int8!
timestamp: Timestamp!
- sum: BigDecimal! @aggregate(fn: "sum", arg: "price")
+ sum: BigDecimal! @aggregate(fn: "sum", arg: "amount")
}
```
-In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum.
+In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum.
### Querying Aggregated Data
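A query against the hourly interval of this aggregation might look like the following — the `interval` argument and field names are based on the example schema and the aggregations readme, so treat this as a sketch:

```graphql
{
  stats(interval: "hour", first: 5) {
    id
    timestamp
    sum
  }
}
```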
@@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \*, /), compar
### Conclusion
-Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach:
+Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach:
- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead.
- Simplifies Development: Eliminates the need for manual aggregation logic in mappings.
- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness.
-By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs.
+By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs.
## Subgraph Best Practices 1-6
diff --git a/website/src/pages/ar/subgraphs/billing.mdx b/website/src/pages/ar/subgraphs/billing.mdx
index e5b5deb5c4ef..71e44f86c1ab 100644
--- a/website/src/pages/ar/subgraphs/billing.mdx
+++ b/website/src/pages/ar/subgraphs/billing.mdx
@@ -4,12 +4,14 @@ title: الفوترة
## Querying Plans
-There are two plans to use when querying subgraphs on The Graph Network.
+There are two plans to use when querying Subgraphs on The Graph Network.
- **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp.
- **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases.
+Learn more about pricing [here](https://thegraph.com/studio-pricing/).
+
## Query Payments with credit card
diff --git a/website/src/pages/ar/subgraphs/developing/creating/advanced.mdx b/website/src/pages/ar/subgraphs/developing/creating/advanced.mdx
index d0f9bb2cc348..c35d101f373e 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/advanced.mdx
+++ b/website/src/pages/ar/subgraphs/developing/creating/advanced.mdx
@@ -4,9 +4,9 @@ title: Advanced Subgraph Features
## نظره عامة
-Add and implement advanced subgraph features to enhanced your subgraph's built.
+Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below:
+Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below:
| Feature | Name |
| ---------------------------------------------------- | ---------------- |
@@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar
| [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` |
| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` |
-For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be:
+For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be:
```yaml
-specVersion: 0.0.4
+specVersion: 1.3.0
description: Gravatar for Ethereum
features:
- fullTextSearch
@@ -25,7 +25,7 @@ features:
dataSources: ...
```
-> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used.
+> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used.
## Timeseries and Aggregations
@@ -33,9 +33,9 @@ Prerequisites:
- Subgraph specVersion must be ≥1.1.0.
-Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more.
+Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more.
-This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL.
+This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL.
### Example Schema
@@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified
## أخطاء غير فادحة
-Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic.
+Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic.
-> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio.
+> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio.
-Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest:
+Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest:
```yaml
-specVersion: 0.0.4
+specVersion: 1.3.0
description: Gravatar for Ethereum
features:
- nonFatalErrors
...
```
-The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example:
+The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example:
```graphql
foos(first: 100, subgraphError: allow) {
@@ -123,7 +123,7 @@ _meta {
}
```
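On the client side, a minimal sketch of acting on that metadata — checking `_meta.hasIndexingErrors` and the `indexing_error` message in a parsed JSON response (the response shape beyond those two fields is an assumption):

```typescript
// Sketch: detecting skipped indexing errors in a query response.
interface QueryResponse {
  data?: { _meta?: { hasIndexingErrors: boolean }; [key: string]: unknown }
  errors?: { message: string }[]
}

// True when the Subgraph has skipped over errors while serving this data.
function hasSkippedErrors(res: QueryResponse): boolean {
  const metaFlag = res.data?._meta?.hasIndexingErrors === true
  const indexingError = (res.errors ?? []).some((e) => e.message === 'indexing_error')
  return metaFlag || indexingError
}
```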
-If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response:
+If the Subgraph encounters an error, that query will return both the data and a GraphQL error with the message `"indexing_error"`, as in this example response:
```graphql
"data": {
@@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a
## IPFS/Arweave File Data Sources
-File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave.
+File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave.
> This also lays the groundwork for deterministic indexing of off-chain data, as well as the potential introduction of arbitrary HTTP-sourced data.
@@ -221,7 +221,7 @@ templates:
- name: TokenMetadata
kind: file/ipfs
mapping:
- apiVersion: 0.0.7
+ apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/mapping.ts
handler: handleMetadata
@@ -290,7 +290,7 @@ Example:
import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates'
const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm'
-//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs.
+//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs.
export function handleTransfer(event: TransferEvent): void {
let token = Token.load(event.params.tokenId.toString())
@@ -317,23 +317,23 @@ This will create a new file data source, which will poll Graph Node's configured
This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity.
-> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file
+> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file
Congratulations, you are using file data sources!
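The CID-as-ID join described above can be sketched in plain TypeScript — an illustration of the lookup only, not graph-ts code, and the entity fields are assumptions:

```typescript
// Sketch: linking a parent Token to its TokenMetadata via the CID.
interface Token {
  id: string
  ipfsURI: string // holds the metadata file's CID
}
interface TokenMetadata {
  id: string // the entity id IS the CID
  name: string
}

const metadataStore = new Map<string, TokenMetadata>() // stand-in for the store

// The file handler saves the metadata entity under the file's CID:
function handleMetadata(cid: string, name: string): void {
  metadataStore.set(cid, { id: cid, name })
}

// Resolving a token's metadata is then a direct id lookup on the CID:
function resolveMetadata(token: Token): TokenMetadata | undefined {
  return metadataStore.get(token.ipfsURI)
}
```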
-#### Deploying your subgraphs
+#### Deploying your Subgraphs
-You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0.
+You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0.
#### Limitations
-File data source handlers and entities are isolated from other subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific:
+File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific:
- Entities created by File Data Sources are immutable, and cannot be updated
- File Data Source handlers cannot access entities from other file data sources
- Entities associated with File Data Sources cannot be accessed by chain-based handlers
-> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a subgraph!
+> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph!
Additionally, it is not possible to create data sources from a file data source, be it an onchain data source or another file data source. This restriction may be lifted in the future.
@@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra
> **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0`
-Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments.
+Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments.
-- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data.
+- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data.
-- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain.
+- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain.
### How Topic Filters Work
-When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments.
+When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments.
- The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event.
@@ -401,7 +401,7 @@ In this example:
#### Configuration in Subgraphs
-Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured:
+Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured:
```yaml
eventHandlers:
@@ -436,7 +436,7 @@ In this configuration:
- `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender.
- `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver.
-- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`.
+- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`.
#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses
@@ -452,17 +452,17 @@ In this configuration:
- `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, or `0xAddressC` is the sender.
- `topic2` is configured to filter `Transfer` events where `0xAddressB` or `0xAddressC` is the receiver.
-- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses.
+- The Subgraph will index transactions that occur in either direction between multiple addresses, allowing for comprehensive monitoring of interactions involving all addresses.
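The matching rule used in both examples can be sketched as follows — plain TypeScript illustrating the per-position OR-list semantics, not Graph Node internals:

```typescript
// Sketch: topic filter matching. Each filter position holds an OR-list of
// allowed values, or null for "no constraint on this position".
type TopicFilter = string[] | null

// An event matches when, for every constrained position, its topic value
// appears in that position's OR-list.
function matchesTopics(eventTopics: string[], filters: TopicFilter[]): boolean {
  return filters.every((allowed, i) => allowed === null || allowed.includes(eventTopics[i]))
}
```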
## Declared eth_call
> Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node.
-Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.
+Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.
This feature does the following:
-- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency.
+- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency.
- Allows faster data fetching, resulting in quicker query responses and a better user experience.
- Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient.
@@ -474,7 +474,7 @@ This feature does the following:
#### Scenario without Declarative `eth_calls`
-Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings.
+Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings.
Traditionally, these calls might be made sequentially:
@@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds
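The latency arithmetic behind that comparison can be sketched directly, using the call durations from the scenario above:

```typescript
// Sketch: sequential execution pays the sum of the call durations,
// parallel execution (declared eth_calls) pays only the slowest call.
const durations = [3, 2, 4] // seconds per eth_call

const sequentialSeconds = durations.reduce((total, d) => total + d, 0) // 3 + 2 + 4
const parallelSeconds = Math.max(...durations) // max(3, 2, 4)
```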
#### How it Works
-1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel.
+1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel.
2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously.
-3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing.
+3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing.
#### Example Configuration in Subgraph Manifest
Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`.
-`Subgraph.yaml` using `event.address`:
+`subgraph.yaml` using `event.address`:
```yaml
eventHandlers:
@@ -524,7 +524,7 @@ Details for the example above:
- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)`
- The `address` and `arguments` can be replaced with variables that will be available when the handler is executed.
-`Subgraph.yaml` using `event.params`
+`subgraph.yaml` using `event.params`
```yaml
calls:
@@ -535,22 +535,22 @@ calls:
> **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network).
-When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed.
+When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed.
-A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level:
+A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level:
```yaml
description: ...
graft:
- base: Qm... # Subgraph ID of base subgraph
+ base: Qm... # Subgraph ID of base Subgraph
block: 7345624 # Block number
```
-When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph.
+When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph.
-Because grafting copies rather than indexes base data, it is much quicker to get the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied.
+Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied.
-The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways:
+The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways:
- يضيف أو يزيل أنواع الكيانات
- يزيل الصفات من أنواع الكيانات
@@ -560,4 +560,4 @@ The grafted subgraph can use a GraphQL schema that is not identical to the one o
- It adds or removes interfaces
- يغير للكيانات التي يتم تنفيذ الواجهة لها
-> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest.
+> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest.
diff --git a/website/src/pages/ar/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/ar/subgraphs/developing/creating/assemblyscript-mappings.mdx
index 2518d7620204..3062fe900657 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/assemblyscript-mappings.mdx
+++ b/website/src/pages/ar/subgraphs/developing/creating/assemblyscript-mappings.mdx
@@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t
For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled.
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events:
+In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events:
```javascript
import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity'
@@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil
## توليد الكود
-In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources.
+In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources.
This is done with
@@ -80,7 +80,7 @@ This is done with
graph codegen [--output-dir ] []
```
-but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same:
+but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same:
```sh
# Yarn
@@ -90,7 +90,7 @@ yarn codegen
npm run codegen
```
-This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with.
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with.
```javascript
import {
@@ -102,12 +102,12 @@ import {
} from '../generated/Gravity/Gravity'
```
-In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with
+In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with
```javascript
import { Gravatar } from '../generated/schema'
```
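A generated entity class is roughly shaped like the following sketch. This is illustrative only — the actual classes are produced by `graph codegen` from your schema, and the in-memory map here stands in for the Graph Node store:

```typescript
// Sketch: the rough shape of a codegen-produced entity class with typed
// fields, save(), and a static load().
const store = new Map<string, Gravatar>() // stand-in for the Graph Node store

class Gravatar {
  displayName: string = ''
  imageUrl: string = ''

  constructor(public id: string) {}

  // Persist this entity under its id.
  save(): void {
    store.set(this.id, this)
  }

  // Fetch a previously saved entity, or null if it does not exist.
  static load(id: string): Gravatar | null {
    return store.get(id) ?? null
  }
}
```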
-> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph.
+> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph.
-Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find.
+Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find.
diff --git a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/CHANGELOG.md
index 5d90888ac378..5f964d3cbb78 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/CHANGELOG.md
+++ b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/CHANGELOG.md
@@ -1,5 +1,11 @@
# @graphprotocol/graph-ts
+## 0.38.0
+
+### Minor Changes
+
+- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings
+
## 0.37.0
### Minor Changes
diff --git a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/api.mdx
index 8245a637cc8a..a721f6bcd8d4 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/api.mdx
+++ b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/api.mdx
@@ -2,12 +2,12 @@
title: AssemblyScript API
---
-> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/).
+> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/).
-Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box:
+Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box:
- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`)
-- Code generated from subgraph files by `graph codegen`
+- Code generated from Subgraph files by `graph codegen`
You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript).
@@ -27,7 +27,7 @@ The `@graphprotocol/graph-ts` library provides the following APIs:
### إصدارات
-The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph.
+The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph.
| الاصدار | ملاحظات الإصدار |
| :-: | --- |
@@ -223,7 +223,7 @@ It adds the following method on top of the `Bytes` API:
The `store` API allows to load, save and remove entities from and to the Graph Node store.
-Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities.
+Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities.
#### إنشاء الكيانات
@@ -282,8 +282,8 @@ As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotoco
The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists.
-- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip.
-- For some subgraphs, these missed lookups can contribute significantly to the indexing time.
+- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip.
+- For some Subgraphs, these missed lookups can contribute significantly to the indexing time.
```typescript
let id = event.transaction.hash // or however the ID is constructed
@@ -380,11 +380,11 @@ The Ethereum API provides access to smart contracts, public state variables, con
#### دعم أنواع الإيثيريوم
-As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder.
+As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder.
-With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them.
+With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them.
-The following example illustrates this. Given a subgraph schema like
+The following example illustrates this. Given a Subgraph schema like
```graphql
type Transfer @entity {
@@ -483,7 +483,7 @@ class Log {
#### الوصول إلى حالة العقد الذكي Smart Contract
-The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block.
+The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block.
A common pattern is to access the contract from which an event originates. This is achieved with the following code:
@@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) {
As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically.
-Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address.
+Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address.
#### معالجة الاستدعاءات المعادة
@@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false
import { log } from '@graphprotocol/graph-ts'
```
-The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument.
+The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments.
The `log` API includes the following functions:
@@ -590,7 +590,7 @@ The `log` API includes the following functions:
- `log.info(fmt: string, args: Array): void` - logs an informational message.
- `log.warning(fmt: string, args: Array): void` - logs a warning.
- `log.error(fmt: string, args: Array): void` - logs an error message.
-- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph.
+- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph.
The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on.
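The substitution described above can be sketched in plain TypeScript (a hypothetical `format` helper for illustration only, not part of `@graphprotocol/graph-ts`):

```typescript
// Sketch of the placeholder substitution `log` performs: each `{}` is
// replaced, left to right, by the next string value in the array.
function format(fmt: string, args: string[]): string {
  let out = fmt
  for (const arg of args) {
    out = out.replace('{}', arg) // replaces only the first remaining `{}`
  }
  return out
}

const message = format('Transfer of {} tokens from {}', ['100', '0xDEAD'])
// message === 'Transfer of 100 tokens from 0xDEAD'
```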
@@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId'))
The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited.
-On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed.
+On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed.
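The per-line behavior can be sketched in plain TypeScript (a hypothetical `mapJsonLines` helper standing in for `ipfs.map` with the `json` flag; the real API calls a named exported handler with a `JSONValue`):

```typescript
// Sketch of `ipfs.map(..., 'json')` semantics: the file must be a series of
// JSON values, one value per line; each parsed value is handed to the callback.
type Callback = (value: unknown, userData: string) => void

function mapJsonLines(file: string, callback: Callback, userData: string): void {
  for (const line of file.split('\n')) {
    if (line.trim() === '') continue
    // A parse error or callback error would abort the calling handler,
    // discarding its buffered entity changes and failing the Subgraph.
    const value = JSON.parse(line)
    callback(value, userData)
  }
}

const ids: number[] = []
mapJsonLines('{"id":1}\n{"id":2}\n', (v) => ids.push((v as { id: number }).id), 'parentId')
// ids === [1, 2]
```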
### Crypto API
@@ -836,7 +836,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to
### DataSourceContext in Manifest
-The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`.
+The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`.
Here is a YAML example illustrating the usage of various types in the `context` section:
@@ -887,4 +887,4 @@ dataSources:
- `List`: Specifies a list of items. Each item needs to specify its type and data.
- `BigInt`: Specifies a large integer value. Must be quoted due to its large size.
-This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs.
+This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs.
diff --git a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/common-issues.mdx
index 6c50af984ad0..b0ce00e687e3 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/common-issues.mdx
+++ b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/common-issues.mdx
@@ -2,7 +2,7 @@
title: مشاكل شائعة في أسمبلي سكريبت (AssemblyScript)
---
-هناك بعض مشاكل [أسمبلي سكريبت](https://github.com/AssemblyScript/assemblyscript) المحددة، التي من الشائع الوقوع فيها أثتاء تطوير غرافٍ فرعي. وهي تتراوح في صعوبة تصحيح الأخطاء، ومع ذلك، فإنّ إدراكها قد يساعد. وفيما يلي قائمة غير شاملة لهذه المشاكل:
+There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debugging difficulty; however, being aware of them may help. The following is a non-exhaustive list of these issues:
- `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object.
- لا يتم توريث النطاق في [دوال الإغلاق](https://www.assemblyscript.org/status.html#on-closures)، أي لا يمكن استخدام المتغيرات المعلنة خارج دوال الإغلاق. الشرح في [ النقاط الهامة للمطورين #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s).
diff --git a/website/src/pages/ar/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/ar/subgraphs/developing/creating/install-the-cli.mdx
index b55d24367e50..81469bc1837b 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/install-the-cli.mdx
+++ b/website/src/pages/ar/subgraphs/developing/creating/install-the-cli.mdx
@@ -2,11 +2,11 @@
title: قم بتثبيت Graph CLI
---
-> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/).
+> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/).
## نظره عامة
-The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network.
+The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network.
## Getting Started
@@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest
yarn global add @graphprotocol/graph-cli
```
-The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started.
+The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started.
## إنشاء الـ Subgraph
### من عقد موجود
-The following command creates a subgraph that indexes all events of an existing contract:
+The following command creates a Subgraph that indexes all events of an existing contract:
```sh
graph init \
@@ -51,25 +51,25 @@ graph init \
- If any of the optional arguments are missing, it guides you through an interactive form.
-- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page.
+- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page.
### من مثال Subgraph
-The following command initializes a new project from an example subgraph:
+The following command initializes a new project from an example Subgraph:
```sh
graph init --from-example=example-subgraph
```
-- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated.
+- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated.
-- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events.
+- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events.
### Add New `dataSources` to an Existing Subgraph
-`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them.
+`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them.
-Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command:
+Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command:
```sh
graph add []
@@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is
يجب أن تتطابق ملف (ملفات) ABI مع العقد (العقود) الخاصة بك. هناك عدة طرق للحصول على ملفات ABI:
- إذا كنت تقوم ببناء مشروعك الخاص ، فمن المحتمل أن تتمكن من الوصول إلى أحدث ABIs.
-- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile.
-- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail.
-
-## SpecVersion Releases
-
-| الاصدار | ملاحظات الإصدار |
-| :-: | --- |
-| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
-| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
-| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs |
-| 0.0.9 | Supports `endBlock` feature |
-| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
-| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
-| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
-| 0.0.5 | Added support for event handlers having access to transaction receipts. |
-| 0.0.4 | Added support for managing subgraph features. |
+- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile.
+- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail.
diff --git a/website/src/pages/ar/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/ar/subgraphs/developing/creating/ql-schema.mdx
index 56d9abb39ae7..a9d52647e13e 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/ql-schema.mdx
+++ b/website/src/pages/ar/subgraphs/developing/creating/ql-schema.mdx
@@ -4,7 +4,7 @@ title: The Graph QL Schema
## نظره عامة
-The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language.
+The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language.
> Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section.
@@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar
Before defining entities, it is important to take a step back and think about how your data is structured and linked.
-- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform.
+- All queries will be made against the data model defined in the Subgraph schema. As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform.
- It may be useful to imagine entities as "objects containing data", rather than as events or functions.
- You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type.
- Each type that should be an entity is required to be annotated with an `@entity` directive.
@@ -141,7 +141,7 @@ type TokenBalance @entity {
Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived.
-For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical.
+For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical.
#### Example
@@ -160,7 +160,7 @@ type TokenBalance @entity {
}
```
-Here is an example of how to write a mapping for a subgraph with reverse lookups:
+Here is an example of how to write a mapping for a Subgraph with reverse lookups:
```typescript
let token = new Token(event.address) // Create Token
@@ -231,7 +231,7 @@ query usersWithOrganizations {
}
```
-This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query.
+This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore to a Subgraph that is often dramatically faster to index and to query.
### إضافة تعليقات إلى المخطط (schema)
@@ -287,7 +287,7 @@ query {
}
```
-> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest.
+> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest.
## اللغات المدعومة
diff --git a/website/src/pages/ar/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/ar/subgraphs/developing/creating/starting-your-subgraph.mdx
index 8f2e787688c2..fa6c44e61fb2 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/starting-your-subgraph.mdx
+++ b/website/src/pages/ar/subgraphs/developing/creating/starting-your-subgraph.mdx
@@ -4,20 +4,32 @@ title: Starting Your Subgraph
## نظره عامة
-The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs.
+The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs.
-When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL.
+When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL.
-Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs.
+Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs.
### Start Building
-Start the process and build a subgraph that matches your needs:
+Start the process and build a Subgraph that matches your needs:
1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure
-2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component
+2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component
3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema
4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings
-5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features
+5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features
Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/).
+
+| الاصدار | ملاحظات الإصدار |
+| :-: | --- |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing Subgraph features. |
diff --git a/website/src/pages/ar/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/ar/subgraphs/developing/creating/subgraph-manifest.mdx
index ba893838ca4e..29a666a8a297 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/subgraph-manifest.mdx
+++ b/website/src/pages/ar/subgraphs/developing/creating/subgraph-manifest.mdx
@@ -4,19 +4,19 @@ title: Subgraph Manifest
## نظره عامة
-The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
+The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows you to query.
-The **subgraph definition** consists of the following files:
+The **Subgraph definition** consists of the following files:
-- `subgraph.yaml`: Contains the subgraph manifest
+- `subgraph.yaml`: Contains the Subgraph manifest
-- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL
+- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL
- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide)
### Subgraph Capabilities
-A single subgraph can:
+A single Subgraph can:
- Index data from multiple smart contracts (but not multiple networks).
@@ -24,12 +24,12 @@ A single subgraph can:
- Add an entry for each contract that requires indexing to the `dataSources` array.
-The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md).
+The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md).
-For the example subgraph listed above, `subgraph.yaml` is:
+For the example Subgraph listed above, `subgraph.yaml` is:
```yaml
-specVersion: 0.0.4
+specVersion: 1.3.0
description: Gravatar for Ethereum
repository: https://github.com/graphprotocol/graph-tooling
schema:
@@ -54,7 +54,7 @@ dataSources:
data: 'bar'
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Gravatar
@@ -79,47 +79,47 @@ dataSources:
## Subgraph Entries
-> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/).
+> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/).
الإدخالات الهامة لتحديث manifest هي:
-- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases.
+- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See the [specVersion releases](#specversion-releases) section for more details on features & releases.
-- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio.
+- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio.
-- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer.
+- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer.
- `features`: a list of all used [feature](#experimental-features) names.
-- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section.
+- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section.
-- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts.
+- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts.
- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created.
- `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`.
-- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development.
+- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development.
- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file.
- `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings.
-- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store.
+- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store.
-- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store.
+- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store.
-- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract.
+- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract.
-A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array.
+A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array.
## Event Handlers
-Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic.
+Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. This enables Subgraphs to process and store event data according to defined logic.
### Defining an Event Handler
-An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected.
+An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected.
```yaml
dataSources:
@@ -131,7 +131,7 @@ dataSources:
abi: Gravity
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Gravatar
@@ -149,11 +149,11 @@ dataSources:
## معالجات الاستدعاء(Call Handlers)
-While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured.
+While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured.
Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract.
-> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network.
+> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every EVM network.
### Defining a Call Handler
@@ -169,7 +169,7 @@ dataSources:
abi: Gravity
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Gravatar
@@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han
### Mapping Function
-Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument:
+Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument:
```typescript
import { CreateGravatarCall } from '../generated/Gravity/Gravity'
@@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a
## Block Handlers
-In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a pre-defined filter.
+In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter.
### Supported Filters
@@ -218,7 +218,7 @@ filter:
_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._
-> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing.
+> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing.
The absence of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type.
@@ -232,7 +232,7 @@ dataSources:
abi: Gravity
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Gravatar
@@ -261,7 +261,7 @@ blockHandlers:
every: 10
```
-The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals.
+The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals.
#### Once Filter
@@ -276,7 +276,7 @@ blockHandlers:
kind: once
```
-The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing.
+The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing.
```ts
export function handleOnce(block: ethereum.Block): void {
@@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void {
### Mapping Function
-The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities.
+The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities.
```typescript
import { ethereum } from '@graphprotocol/graph-ts'
@@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. By de
Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them.
-To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false.
+To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false.
```yaml
eventHandlers:
@@ -360,7 +360,7 @@ dataSources:
abi: Factory
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/mappings/factory.ts
entities:
@@ -390,7 +390,7 @@ templates:
abi: Exchange
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/mappings/exchange.ts
entities:
@@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ
## Start Blocks
-The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created.
+The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created.
```yaml
dataSources:
@@ -467,7 +467,7 @@ dataSources:
startBlock: 6627917
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/mappings/factory.ts
entities:
@@ -488,13 +488,13 @@ dataSources:
## Indexer Hints
-The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning.
+The `indexerHints` setting in a Subgraph's manifest provides directives for Indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning.
> This feature is available from `specVersion: 1.0.0`
### Prune
-`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include:
+`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include:
1. `"never"`: No pruning of historical data; retains the entire history.
2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance.
@@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde
prune: auto
```
-> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities.
+> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities.
History as of a given block is required for:
-- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history
-- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block
-- Rewinding the subgraph back to that block
+- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history
+- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block
+- Rewinding the Subgraph back to that block
If historical data as of the block has been pruned, the above capabilities will not be available.
> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data.
-For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings:
+For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your Subgraph's settings:
To retain a specific amount of historical data:
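For illustration, such a configuration might look like the following (a minimal sketch; the block count is a placeholder to tune for your own retention needs):

```yaml
indexerHints:
  prune: 100000 # placeholder: keep history for the most recent 100000 blocks
```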
@@ -532,3 +532,18 @@ To preserve the complete history of entity states:
indexerHints:
prune: never
```
+
+## SpecVersion Releases
+
+| Version | Release Notes |
+| :-: | --- |
+| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
| 0.0.4 | Added support for managing Subgraph features. |
diff --git a/website/src/pages/ar/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/ar/subgraphs/developing/creating/unit-testing-framework.mdx
index e72d68bef7c8..44c9fedacb10 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/unit-testing-framework.mdx
+++ b/website/src/pages/ar/subgraphs/developing/creating/unit-testing-framework.mdx
@@ -2,12 +2,12 @@
title: Unit Testing Framework
---
-Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs.
+Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs.
## Benefits of Using Matchstick
- It's written in Rust and optimized for high performance.
-- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more.
+- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more.
## Getting Started
@@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra
### Using Matchstick
-To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified).
+To use **Matchstick** in your Subgraph project, just open up a terminal, navigate to the root folder of your project, and run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified).
### CLI options
@@ -113,7 +113,7 @@ graph test path/to/file.test.ts
```sh
-c, --coverage Run the tests in coverage mode
--d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph)
+-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph)
-f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image.
-h, --help Show usage information
-l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes)
@@ -145,17 +145,17 @@ libsFolder: path/to/libs
manifestPath: path/to/subgraph.yaml
```
-### Demo subgraph
+### Demo Subgraph
You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph)
### Video tutorials
-Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h)
+You can also check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h)
## Tests structure
-_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_
+_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_
### describe()
@@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im
There we go - we've created our first test! 👏
-Now in order to run our tests you simply need to run the following in your subgraph root folder:
+Now in order to run our tests you simply need to run the following in your Subgraph root folder:
`graph test Gravity`
@@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri
Users can mock IPFS files by using the `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file.
-NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow:
+NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below:
`.test.ts` file:
@@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index'
import { ipfs } from '@graphprotocol/graph-ts'
import { gravatarFromIpfs } from './utils'
-// Export ipfs.map() callback in order for matchstck to detect it
+// Export ipfs.map() callback in order for matchstick to detect it
export { processGravatar } from './utils'
test('ipfs.cat', () => {
@@ -1172,7 +1172,7 @@ templates:
network: mainnet
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/token-lock-wallet.ts
handler: handleMetadata
@@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => {
## Test Coverage
-Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
+Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as a whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it has been properly mocked.
@@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as
## Additional Resources
-For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_).
+For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_).
## Feedback
diff --git a/website/src/pages/ar/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/ar/subgraphs/developing/deploying/multiple-networks.mdx
index 4f7dcd3864e8..3b2b1bbc70ae 100644
--- a/website/src/pages/ar/subgraphs/developing/deploying/multiple-networks.mdx
+++ b/website/src/pages/ar/subgraphs/developing/deploying/multiple-networks.mdx
@@ -1,12 +1,13 @@
---
title: Deploying a Subgraph to Multiple Networks
+sidebarTitle: Deploying to Multiple Networks
---
-This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/).
+This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/).
-## Deploying the subgraph to multiple networks
+## Deploying the Subgraph to multiple networks
-In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different.
+In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different.
### Using `graph-cli`
@@ -20,7 +21,7 @@ Options:
--network-file Networks config file path (default: "./networks.json")
```
-You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development.
+You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development.
> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks.
@@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit
> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option.
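As a rough sketch of the shape such a file takes, each network is keyed by name and each data source carries its per-network `address` and optional `startBlock` (the data source name and addresses below are placeholders, not real deployments):

```json
{
  "mainnet": {
    "Gravity": {
      "address": "0x0000000000000000000000000000000000000000",
      "startBlock": 6627917
    }
  },
  "sepolia": {
    "Gravity": {
      "address": "0x0000000000000000000000000000000000000000"
    }
  }
}
```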
-Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`:
+Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`:
```yaml
# ...
@@ -96,7 +97,7 @@ yarn build --network sepolia
yarn build --network sepolia --network-file path/to/config
```
-The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this:
+The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this:
```yaml
# ...
@@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config
One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/).
-To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network:
+To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network:
```json
{
@@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional
}
```
-To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands:
+To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands:
```sh
# Mainnet:
@@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e
**Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well.
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error.
+This will give you the `chainHeadBlock`, which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` indicates whether the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error.
-## Subgraph Studio subgraph archive policy
+## Subgraph Studio Subgraph archive policy
-A subgraph version in Studio is archived if and only if it meets the following criteria:
+A Subgraph version in Studio is archived if and only if it meets the following criteria:
- The version is not published to the network (or pending publish)
- The version was created 45 or more days ago
-- The subgraph hasn't been queried in 30 days
+- The Subgraph hasn't been queried in 30 days
-In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived.
+In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived.
-Every subgraph affected with this policy has an option to bring the version in question back.
+Every Subgraph affected by this policy has an option to bring the version in question back.
-## Checking subgraph health
+## Checking Subgraph health
-If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators.
+If a Subgraph syncs successfully, that is a good sign that it will continue to run well. However, new triggers on the network might cause your Subgraph to hit an untested error condition, or it may start to fall behind due to performance issues or issues with the node operators.
-Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph:
+Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:
```graphql
{
@@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of
}
```
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error.
+This will give you the `chainHeadBlock`, which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` indicates whether the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error.
diff --git a/website/src/pages/ar/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/ar/subgraphs/developing/deploying/using-subgraph-studio.mdx
index d8880ef1a196..1e0826bfe148 100644
--- a/website/src/pages/ar/subgraphs/developing/deploying/using-subgraph-studio.mdx
+++ b/website/src/pages/ar/subgraphs/developing/deploying/using-subgraph-studio.mdx
@@ -2,23 +2,23 @@
title: Deploying Using Subgraph Studio
---
-Learn how to deploy your subgraph to Subgraph Studio.
+Learn how to deploy your Subgraph to Subgraph Studio.
-> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain.
+> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain.
## Subgraph Studio Overview
In [Subgraph Studio](https://thegraph.com/studio/), you can do the following:
-- View a list of subgraphs you've created
-- Manage, view details, and visualize the status of a specific subgraph
-- إنشاء وإدارة مفاتيح API الخاصة بك لـ subgraphs محددة
+- View a list of Subgraphs you've created
+- Manage, view details, and visualize the status of a specific Subgraph
+- Create and manage your API keys for specific Subgraphs
- Restrict your API keys to specific domains and allow only certain Indexers to query with them
-- Create your subgraph
-- Deploy your subgraph using The Graph CLI
-- Test your subgraph in the playground environment
-- Integrate your subgraph in staging using the development query URL
-- Publish your subgraph to The Graph Network
+- Create your Subgraph
+- Deploy your Subgraph using The Graph CLI
+- Test your Subgraph in the playground environment
+- Integrate your Subgraph in staging using the development query URL
+- Publish your Subgraph to The Graph Network
- Manage your billing
## Install The Graph CLI
@@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli
1. Open [Subgraph Studio](https://thegraph.com/studio/).
2. Connect your wallet to sign in.
- You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe.
-3. After you sign in, your unique deploy key will be displayed on your subgraph details page.
- - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised.
+3. After you sign in, your unique deploy key will be displayed on your Subgraph details page.
+ - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised.
-> Important: You need an API key to query subgraphs
+> Important: You need an API key to query Subgraphs
### How to Create a Subgraph in Subgraph Studio
@@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli
### Subgraph Compatibility with The Graph Network
-In order to be supported by Indexers on The Graph Network, subgraphs must:
-
-- Index a [supported network](/supported-networks/)
-- يجب ألا تستخدم أيًا من الميزات التالية:
- - ipfs.cat & ipfs.map
- - أخطاء غير فادحة
- - Grafting
+To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo.
## Initialize Your Subgraph
-Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command:
+Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command:
```bash
graph init
```
-You can find the `` value on your subgraph details page in Subgraph Studio, see image below:
+You can find the `` value on your Subgraph details page in Subgraph Studio; see the image below:

-After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected.
+After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected.
## Graph Auth
-Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page.
+Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page.
Then, use the following command to authenticate from the CLI:
@@ -91,11 +85,11 @@ graph auth
## Deploying a Subgraph
-Once you are ready, you can deploy your subgraph to Subgraph Studio.
+Once you are ready, you can deploy your Subgraph to Subgraph Studio.
-> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network.
+> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network.
-Use the following CLI command to deploy your subgraph:
+Use the following CLI command to deploy your Subgraph:
```bash
graph deploy
@@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label.
## Testing Your Subgraph
-After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.
+After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.
-Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph.
+Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph.
## Publish Your Subgraph
-In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
+In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
## Versioning Your Subgraph with the CLI
-If you want to update your subgraph, you can do the following:
+If you want to update your Subgraph, you can do the following:
- You can deploy a new version to Studio using the CLI (it will only be private at this point).
- Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer).
-- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index.
+- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index.
-You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment.
+You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment.
-> Note: There are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/).
+> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/).
-## الأرشفة التلقائية لإصدارات الـ Subgraph
+## Automatic Archiving of Subgraph Versions
-Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio.
+Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio.
-> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived.
+> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived.

diff --git a/website/src/pages/ar/subgraphs/developing/developer-faq.mdx b/website/src/pages/ar/subgraphs/developing/developer-faq.mdx
index f0e9ba0cd865..016a7a8e5a04 100644
--- a/website/src/pages/ar/subgraphs/developing/developer-faq.mdx
+++ b/website/src/pages/ar/subgraphs/developing/developer-faq.mdx
@@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o
## Subgraph Related
-### 1. What is a subgraph?
+### 1. What is a Subgraph?
-A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query.
+A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query.
-### 2. What is the first step to create a subgraph?
+### 2. What is the first step to create a Subgraph?
-To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/).
+To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/).
-### 3. Can I still create a subgraph if my smart contracts don't have events?
+### 3. Can I still create a Subgraph if my smart contracts don't have events?
-It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data.
+It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data.
-If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower.
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower.
-### 4. Can I change the GitHub account associated with my subgraph?
+### 4. Can I change the GitHub account associated with my Subgraph?
-No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph.
+No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph.
-### 5. How do I update a subgraph on mainnet?
+### 5. How do I update a Subgraph on mainnet?
-You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on.
+You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action keeps your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on.
-### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying?
+### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying?
-يجب عليك إعادة نشر ال الفرعيةرسم بياني ، ولكن إذا لم يتغير الفرعيةرسم بياني (ID (IPFS hash ، فلن يضطر إلى المزامنة من البداية.
+You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning.
-### 7. How do I call a contract function or access a public state variable from my subgraph mappings?
+### 7. How do I call a contract function or access a public state variable from my Subgraph mappings?
Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state).
-### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings?
+### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings?
Not currently, as mappings are written in AssemblyScript.
@@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p
### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events?
-ضمن ال Subgraph ، تتم معالجة الأحداث دائمًا بالترتيب الذي تظهر به في الكتل ، بغض النظر عما إذا كان ذلك عبر عقود متعددة أم لا.
+Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not.
### 10. How are templates different from data sources?
-Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address.
+Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address.
Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates).
-### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`?
+### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`?
Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other.
@@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest
If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique.
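The scheme above can be sketched in plain TypeScript (names are illustrative; in a real AssemblyScript mapping you would build the same ID from the event's `transaction.hash` and `logIndex`):

```typescript
// Sketch of the ID scheme described above: transaction hash + log index.
function entityId(txHash: string, logIndex: number): string {
  return `${txHash}-${logIndex.toString()}`;
}

// Unique only as long as at most one entity is created per event.
const id = entityId("0xabc123", 7); // "0xabc123-7"
```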
-### 15. Can I delete my subgraph?
+### 15. Can I delete my Subgraph?
-Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph.
+Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph.
## Network Related
@@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul
Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks)
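For illustration, a hypothetical `subgraph.yaml` dataSource fragment (the address, ABI name, and block number are placeholders):

```yaml
dataSources:
  - kind: ethereum/contract
    name: MyContract # illustrative name
    network: mainnet
    source:
      address: "0x0000000000000000000000000000000000000000" # placeholder
      abi: MyContract
      startBlock: 12345678 # block in which the contract was created
    # mapping: ... (handlers omitted for brevity)
```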
-### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync
+### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync
Take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks)
-### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed?
+### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed?
-نعم! جرب الأمر التالي ، مع استبدال "Organization / subgraphName" بالمؤسسة واسم الـ subgraph الخاص بك:
+Yes! Try the following command, substituting "organization/subgraphName" with your organization and Subgraph name:
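This check uses the Subgraph's special `_meta` field. A minimal sketch in TypeScript (the endpoint URL is a placeholder — substitute your Subgraph's query URL):

```typescript
// Sketch: ask a Subgraph for the latest block it has indexed via the
// special `_meta` field. Endpoint is a placeholder, not a real URL.
const endpoint =
  "https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>";

const metaQuery = `{
  _meta {
    block { number }
    hasIndexingErrors
  }
}`;

async function latestIndexedBlock(): Promise<number> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: metaQuery }),
  });
  const { data } = await res.json();
  return data._meta.block.number;
}
```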
@@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... }
### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high?
-Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.
+Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_, and to a specific Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.
## Miscellaneous
diff --git a/website/src/pages/ar/subgraphs/developing/introduction.mdx b/website/src/pages/ar/subgraphs/developing/introduction.mdx
index d3b71aaab704..946e62affbe7 100644
--- a/website/src/pages/ar/subgraphs/developing/introduction.mdx
+++ b/website/src/pages/ar/subgraphs/developing/introduction.mdx
@@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin
On The Graph, you can:
-1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
-2. Use GraphQL to query existing subgraphs.
+1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
+2. Use GraphQL to query existing Subgraphs.
### What is GraphQL?
-- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs.
### Developer Actions
-- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps.
-- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers.
-- Deploy, publish and signal your subgraphs within The Graph Network.
+- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps.
+- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers.
+- Deploy, publish and signal your Subgraphs within The Graph Network.
-### What are subgraphs?
+### What are Subgraphs?
-A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL.
+A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL.
-Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics.
+Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics.
diff --git a/website/src/pages/ar/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/ar/subgraphs/developing/managing/deleting-a-subgraph.mdx
index 5a4ac15e07fd..b8c2330ca49d 100644
--- a/website/src/pages/ar/subgraphs/developing/managing/deleting-a-subgraph.mdx
+++ b/website/src/pages/ar/subgraphs/developing/managing/deleting-a-subgraph.mdx
@@ -2,30 +2,30 @@
title: Deleting a Subgraph
---
-Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/).
+Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/).
-> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it.
+> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it.
## Step-by-Step
-1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/).
+1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/).
2. Click on the three-dots to the right of the "publish" button.
-3. Click on the option to "delete this subgraph":
+3. Click on the option to "delete this Subgraph":

-4. Depending on the subgraph's status, you will be prompted with various options.
+4. Depending on the Subgraph's status, you will be prompted with various options.
- - If the subgraph is not published, simply click “delete” and confirm.
- - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required.
+ - If the Subgraph is not published, simply click “delete” and confirm.
+ - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required.
-> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner.
+> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner.
### Important Reminders
-- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal.
-- Curators will not be able to signal on the subgraph anymore.
-- Curators that already signaled on the subgraph can withdraw their signal at an average share price.
-- Deleted subgraphs will show an error message.
+- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal.
+- Curators will not be able to signal on the Subgraph anymore.
+- Curators that already signaled on the Subgraph can withdraw their signal at an average share price.
+- Deleted Subgraphs will show an error message.
diff --git a/website/src/pages/ar/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/ar/subgraphs/developing/managing/transferring-a-subgraph.mdx
index 0fc6632cbc40..e80bde3fa6d2 100644
--- a/website/src/pages/ar/subgraphs/developing/managing/transferring-a-subgraph.mdx
+++ b/website/src/pages/ar/subgraphs/developing/managing/transferring-a-subgraph.mdx
@@ -2,18 +2,18 @@
title: Transferring a Subgraph
---
-Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network.
+Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network.
## Reminders
-- Whoever owns the NFT controls the subgraph.
-- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network.
-- You can easily move control of a subgraph to a multi-sig.
-- A community member can create a subgraph on behalf of a DAO.
+- Whoever owns the NFT controls the Subgraph.
+- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network.
+- You can easily move control of a Subgraph to a multi-sig.
+- A community member can create a Subgraph on behalf of a DAO.
## View Your Subgraph as an NFT
-To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**:
+To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**:
```
https://opensea.io/your-wallet-address
@@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres
## Step-by-Step
-To transfer ownership of a subgraph, do the following:
+To transfer ownership of a Subgraph, do the following:
1. Use the UI built into Subgraph Studio:

-2. Choose the address that you would like to transfer the subgraph to:
+2. Choose the address that you would like to transfer the Subgraph to:

diff --git a/website/src/pages/ar/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/ar/subgraphs/developing/publishing/publishing-a-subgraph.mdx
index dca943ad3152..2bc0ec5f514c 100644
--- a/website/src/pages/ar/subgraphs/developing/publishing/publishing-a-subgraph.mdx
+++ b/website/src/pages/ar/subgraphs/developing/publishing/publishing-a-subgraph.mdx
@@ -1,10 +1,11 @@
---
title: Publishing a Subgraph to the Decentralized Network
+sidebarTitle: Publishing to the Decentralized Network
---
-Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network.
+Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network.
-When you publish a subgraph to the decentralized network, you make it available for:
+When you publish a Subgraph to the decentralized network, you make it available for:
- [Curators](/resources/roles/curating/) to begin curating it.
- [Indexers](/indexing/overview/) to begin indexing it.
@@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/).
1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard
2. Click on the **Publish** button
-3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/).
+3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/).
-All published versions of an existing subgraph can:
+All published versions of an existing Subgraph can:
- Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/).
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published.
+- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published.
-### Updating metadata for a published subgraph
+### Updating metadata for a published Subgraph
-- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio.
+- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio.
- Once you’ve saved your changes and published the updates, they will appear in Graph Explorer.
- It's important to note that this process will not create a new version since your deployment has not changed.
## Publishing from the CLI
-As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli).
+As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli).
1. Open the `graph-cli`.
2. Use the following commands: `graph codegen && graph build` then `graph publish`.
-3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice.
+3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice.

### Customizing your deployment
-You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags:
+You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags:
```
USAGE
@@ -61,33 +62,33 @@ FLAGS
```
-## Adding signal to your subgraph
+## Adding signal to your Subgraph
-Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph.
+Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph.
-- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled.
+- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled.
-- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
+- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
- Specific supported networks can be checked [here](/supported-networks/).
-> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers.
+> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers.
>
-> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph.
+> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT to attract additional Indexers to index your Subgraph.
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
+The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
-When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
+When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
-Indexers can find subgraphs to index based on curation signals they see in Graph Explorer.
+Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer.
-
+
-Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published.
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction in which it is published.

-Alternatively, you can add GRT signal to a published subgraph from Graph Explorer.
+Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer.

diff --git a/website/src/pages/ar/subgraphs/developing/subgraphs.mdx b/website/src/pages/ar/subgraphs/developing/subgraphs.mdx
index b52ec5cd2843..b2d94218cd67 100644
--- a/website/src/pages/ar/subgraphs/developing/subgraphs.mdx
+++ b/website/src/pages/ar/subgraphs/developing/subgraphs.mdx
@@ -4,83 +4,83 @@ title: Subgraphs
## What is a Subgraph?
-A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL.
+A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL.
### Subgraph Capabilities
- **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3.
-- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/).
-- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer).
+- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/).
+- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer).
## Inside a Subgraph
-The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
+The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from those contracts to watch for, and how to map event data to entities that Graph Node stores and makes available for querying.
-The **subgraph definition** consists of the following files:
+The **Subgraph definition** consists of the following files:
-- `subgraph.yaml`: Contains the subgraph manifest
+- `subgraph.yaml`: Contains the Subgraph manifest
-- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL
+- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL
- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema
-To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/).
+To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/).
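As a sketch, a minimal manifest for a single Ethereum data source might look like the following (the contract address, entity, and handler names are placeholders, not from a specific published Subgraph):

```yaml
# Illustrative subgraph.yaml — address, names, and versions are placeholders.
specVersion: 0.0.4
schema:
  file: ./schema.graphql
dataSources:
  - kind: ethereum
    name: ExampleContract
    network: mainnet
    source:
      address: "0x0000000000000000000000000000000000000000"
      abi: ExampleContract
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.6
      language: wasm/assemblyscript
      entities:
        - Transfer
      abis:
        - name: ExampleContract
          file: ./abis/ExampleContract.json
      eventHandlers:
        - event: Transfer(indexed address,indexed address,uint256)
          handler: handleTransfer
      file: ./src/mapping.ts
```

Each `eventHandlers` entry pairs an event signature from the ABI with the mapping function in `mapping.ts` that turns it into schema entities.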
## دورة حياة الـ Subgraph
-Here is a general overview of a subgraph’s lifecycle:
+Here is a general overview of a Subgraph’s lifecycle:

## Subgraph Development
-1. [Create a subgraph](/developing/creating-a-subgraph/)
-2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/)
-3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio)
-4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/)
-5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph)
+1. [Create a Subgraph](/developing/creating-a-subgraph/)
+2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/)
+3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio)
+4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/)
+5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph)
### Build locally
-Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs.
+Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs.
### Deploy to Subgraph Studio
-Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following:
+Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following:
-- Use its staging environment to index the deployed subgraph and make it available for review.
-- Verify that your subgraph doesn't have any indexing errors and works as expected.
+- Use its staging environment to index the deployed Subgraph and make it available for review.
+- Verify that your Subgraph doesn't have any indexing errors and works as expected.
### Publish to the Network
-When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network.
+When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network.
-- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers.
-- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT.
-- Published subgraphs have associated metadata, which provides other network participants with useful context and information.
+- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers.
+- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT.
+- Published Subgraphs have associated metadata, which provides other network participants with useful context and information.
### Add Curation Signal for Indexing
-Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph.
+Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph.
#### What is signal?
-- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it.
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume.
+- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it.
+- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume.
### Querying & Application Development
Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/).
-Learn more about [querying subgraphs](/subgraphs/querying/introduction/).
+Learn more about [querying Subgraphs](/subgraphs/querying/introduction/).
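Once indexed, consumers fetch data with ordinary GraphQL. A query against a hypothetical `transfers` entity might look like this (entity and field names are illustrative):

```graphql
# Illustrative query — entity and field names are hypothetical.
{
  transfers(first: 5, orderBy: timestamp, orderDirection: desc) {
    id
    from
    to
    amount
  }
}
```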
### Updating Subgraphs
-To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing.
+To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing.
-- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax.
-- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying.
+- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax.
+- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying.
### Deleting & Transferring Subgraphs
-If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/).
+If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/).
diff --git a/website/src/pages/ar/subgraphs/explorer.mdx b/website/src/pages/ar/subgraphs/explorer.mdx
index 512be28e8322..57d7712cc383 100644
--- a/website/src/pages/ar/subgraphs/explorer.mdx
+++ b/website/src/pages/ar/subgraphs/explorer.mdx
@@ -2,11 +2,11 @@
title: Graph Explorer
---
-Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).
+Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).
## نظره عامة
-Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
+Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
## Inside Explorer
@@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi
### Subgraphs Page
-After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:
+After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:
-- Your own finished subgraphs
+- Your own finished Subgraphs
- Subgraphs published by others
-- The exact subgraph you want (based on the date created, signal amount, or name).
+- The exact Subgraph you want (based on the date created, signal amount, or name).

-When you click into a subgraph, you will be able to do the following:
+When you click into a Subgraph, you will be able to do the following:
- Test queries in the playground and leverage network details to make informed decisions.
-- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality.
+- Signal GRT on your own Subgraph or the Subgraphs of others to make indexers aware of its importance and quality.
- - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
+ - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.

-On each subgraph’s dedicated page, you can do the following:
+On each Subgraph’s dedicated page, you can do the following:
-- أشر/الغي الإشارة على Subgraphs
+- Signal/Un-signal on Subgraphs
- اعرض المزيد من التفاصيل مثل المخططات و ال ID الحالي وبيانات التعريف الأخرى
-- بدّل بين الإصدارات وذلك لاستكشاف التكرارات السابقة ل subgraphs
-- استعلم عن subgraphs عن طريق GraphQL
-- اختبار subgraphs في playground
-- اعرض المفهرسين الذين يفهرسون Subgraphs معين
+- Switch versions to explore past iterations of the Subgraph
+- Query Subgraphs via GraphQL
+- Test Subgraphs in the playground
+- View the Indexers that are indexing on a certain Subgraph
- إحصائيات subgraphs (المخصصات ، المنسقين ، إلخ)
-- اعرض من قام بنشر ال Subgraphs
+- View the entity who published the Subgraph

@@ -53,7 +53,7 @@ On this page, you can see the following:
- Indexers who collected the most query fees
- Indexers with the highest estimated APR
-Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph.
+Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph.
### Participants Page
@@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every

-Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs.
+Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs.
-In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards.
+In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
**Specifics**
@@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s
- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
-- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing.
+- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated.
- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations.
- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time.
@@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici
#### 3. المفوضون Delegators
-Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed.
+Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed.
-- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve.
- - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on.
+- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve.
+ - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on.
- The bonding curve incentivizes Curators to curate the highest quality data sources.
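The pricing effect of a bonding curve can be sketched with a toy square-root curve (purely illustrative — not The Graph's actual curve or parameters): because shares grow with the square root of the reserve, early signal mints more shares per GRT than later signal.

```typescript
// Toy bonding-curve sketch — illustrative only, not The Graph's actual curve.
// Shares outstanding track the square root of the GRT reserve, so the price
// per share rises as more GRT is deposited.
function sharesMinted(reserve: number, supply: number, deposit: number): number {
  if (reserve === 0) {
    return Math.sqrt(deposit); // first curator bootstraps the curve
  }
  return supply * (Math.sqrt((reserve + deposit) / reserve) - 1);
}

// An early 100 GRT deposit mints 10 shares; a later curator must deposit
// 300 GRT (on top of the existing 100 GRT reserve) to mint the same 10 shares.
const early = sharesMinted(0, 0, 100);
const late = sharesMinted(100, early, 300);
```

This is why the curve "incentivizes Curators to curate the highest quality data sources": spotting a good Subgraph early is rewarded with cheaper shares.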
In the Curator table listed below, you can see:
@@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ
A few key details to note:
-- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers.
-- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
+- **Query fees represent the fees generated by the consumers.** They can be claimed (or not) by the Indexers at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers.
+- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (i.e., during some epochs, Indexers might have collectively closed allocations that had been open for many days).

@@ -178,15 +178,15 @@ In this section, you can view the following:
### تبويب ال Subgraphs
-In the Subgraphs tab, you’ll see your published subgraphs.
+In the Subgraphs tab, you’ll see your published Subgraphs.
-> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.
+> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.

### تبويب الفهرسة
-In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.
+In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.
هذا القسم سيتضمن أيضا تفاصيل حول صافي مكافآت المفهرس ورسوم الاستعلام الصافي الخاصة بك. سترى المقاييس التالية:
@@ -223,13 +223,13 @@ In the Delegators tab, you can find the details of your active and historical de
### تبويب التنسيق Curating
-في علامة التبويب Curation ، ستجد جميع ال subgraphs التي تشير إليها (مما يتيح لك تلقي رسوم الاستعلام). الإشارة تسمح للمنسقين التوضيح للمفهرسين ماهي ال subgraphs ذات الجودة العالية والموثوقة ، مما يشير إلى ضرورة فهرستها.
+In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, indicating that they should be indexed.
ضمن علامة التبويب هذه ، ستجد نظرة عامة حول:
-- جميع ال subgraphs التي تقوم بتنسيقها مع تفاصيل الإشارة
-- إجمالي الحصة لكل subgraph
-- مكافآت الاستعلام لكل subgraph
+- All the Subgraphs you're curating on with signal details
+- Share totals per Subgraph
+- Query rewards per Subgraph
- تحديث في تفاصيل التاريخ

diff --git a/website/src/pages/ar/subgraphs/guides/_meta.js b/website/src/pages/ar/subgraphs/guides/_meta.js
index 37e18bc51651..a1bb04fb6d3f 100644
--- a/website/src/pages/ar/subgraphs/guides/_meta.js
+++ b/website/src/pages/ar/subgraphs/guides/_meta.js
@@ -1,4 +1,5 @@
export default {
+ 'subgraph-composition': '',
'subgraph-debug-forking': '',
near: '',
arweave: '',
diff --git a/website/src/pages/ar/subgraphs/guides/arweave.mdx b/website/src/pages/ar/subgraphs/guides/arweave.mdx
index 08e6c4257268..4bb8883b4bd0 100644
--- a/website/src/pages/ar/subgraphs/guides/arweave.mdx
+++ b/website/src/pages/ar/subgraphs/guides/arweave.mdx
@@ -53,7 +53,7 @@ $ graph codegen # generates types from the schema file identified in the manifes
$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
```
-## Subgraph Manifest Definition
+## تعريف Subgraph Manifest
The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph:
@@ -92,12 +92,12 @@ Arweave data sources support two types of handlers:
- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`; if users want to process all transactions, they should provide "" as the `source.owner`
> The source.owner can be the owner's address, or their Public Key.
-
+>
> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users.
-
+>
> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet.
-## Schema Definition
+## تعريف المخطط
Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
@@ -162,7 +162,7 @@ graph deploy --access-token
The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
-## Example Subgraphs
+## أمثلة على الـ Subgraphs
Here is an example Subgraph for reference:
diff --git a/website/src/pages/ar/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/ar/subgraphs/guides/contract-analyzer.mdx
index 084ac8d28a00..84aeda12e0fc 100644
--- a/website/src/pages/ar/subgraphs/guides/contract-analyzer.mdx
+++ b/website/src/pages/ar/subgraphs/guides/contract-analyzer.mdx
@@ -2,11 +2,15 @@
title: Smart Contract Analysis with Cana CLI
---
-# Cana CLI: Quick & Efficient Contract Analysis
+Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains.
-**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains.
+## نظره عامة
-## 📌 Key Features
+**Cana CLI** is a command-line tool that streamlines smart contract metadata analysis for subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more.
+
+### Key Features
+
+With Cana CLI, you can:
- Detect deployment blocks
- Verify source code
@@ -14,47 +18,59 @@ title: Smart Contract Analysis with Cana CLI
- Identify proxy and implementation contracts
- Support multiple chains
-## 🚀 Installation & Setup
+### Prerequisites
+
+Before installing Cana CLI, make sure you have:
+
+- [Node.js v16+](https://nodejs.org/en)
+- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install)
+- Block explorer API keys
+
+### Installation & Setup
-Install Cana globally using npm:
+1. Install Cana CLI
+
+Use npm to install it globally:
```bash
npm install -g contract-analyzer
```
-Set up a blockchain for analysis:
+2. Configure Cana CLI
+
+Set up a blockchain environment for analysis:
```bash
cana setup
```
-Provide the required block explorer API and block explorer endpoint URL details when prompted.
+During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL.
-Running `cana setup` creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use.
+After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use.
-## 🍳 Usage
+### Steps: Using Cana CLI for Smart Contract Analysis
-### 🔹 Chain Selection
+#### 1. Select a Chain
-Cana supports multiple EVM-compatible chains.
+Cana CLI supports multiple EVM-compatible chains.
-List chains added with:
+To list the chains you've added, run this command:
```bash
cana chains
```
-Then select a chain with:
+Then select a chain with this command:
```bash
cana chains --switch
```
-Once a chain is selected, all subsequent contract analases will continue on that chain.
+Once a chain is selected, all subsequent contract analyses will continue on that chain.
-### 🔹 Basic Contract Analysis
+#### 2. Basic Contract Analysis
-Analyze a contract with:
+Run the following command to analyze a contract:
```bash
cana analyze 0xContractAddress
@@ -66,11 +82,11 @@ or
cana -a 0xContractAddress
```
-This command displays essential contract information in the terminal using a clear, organized format.
+This command fetches and displays essential contract information in the terminal using a clear, organized format.
-### 🔹 Understanding Output
+#### 3. Understanding the Output
-Cana organizes results into the terminal as well as into a structured directory when detailed contract data is successfully retrieved:
+Cana CLI prints results to the terminal and, when detailed contract data is successfully retrieved, also writes them to a structured directory:
```
contracts-analyzed/
@@ -80,24 +96,22 @@ contracts-analyzed/
└── event-information.json # Event signatures and examples
```
-### 🔹 Chain Management
+This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development.
+
+#### 4. Chain Management
Add and manage chains:
```bash
-cana setup # Add a new chain
-cana chains # List configured chains
-cana chains -s # Swich chains.
+cana setup # Add a new chain
+cana chains # List configured chains
+cana chains -s # Switch chains
```
-## ⚠️ Troubleshooting
+### Troubleshooting
-- **Missing Data**: Ensure the contract address is correct, verified on the block explorer, and that your API key has the required permissions.
+Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions.
-## ✅ Requirements
-
-- Node.js v16+
-- npm v6+
-- Block explorer API keys
+### Conclusion
-Keep your contract analyses efficient and well-organized. 🚀
+With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and streamline subgraph development.
diff --git a/website/src/pages/ar/subgraphs/guides/enums.mdx b/website/src/pages/ar/subgraphs/guides/enums.mdx
index 9f55ae07c54b..846faecc1706 100644
--- a/website/src/pages/ar/subgraphs/guides/enums.mdx
+++ b/website/src/pages/ar/subgraphs/guides/enums.mdx
@@ -269,6 +269,6 @@ Expected output includes the marketplaces that meet the criteria, each represent
}
```
-## Additional Resources
+## مصادر إضافية
For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums).
diff --git a/website/src/pages/ar/subgraphs/guides/grafting.mdx b/website/src/pages/ar/subgraphs/guides/grafting.mdx
index d9abe0e70d2a..4b7dad1a54d9 100644
--- a/website/src/pages/ar/subgraphs/guides/grafting.mdx
+++ b/website/src/pages/ar/subgraphs/guides/grafting.mdx
@@ -10,13 +10,13 @@ Grafting reuses the data from an existing Subgraph and starts indexing it at a l
The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways:
-- It adds or removes entity types
-- It removes attributes from entity types
+- يضيف أو يزيل أنواع الكيانات
+- يزيل الصفات من أنواع الكيانات
- It adds nullable attributes to entity types
- It turns non-nullable attributes into nullable attributes
- It adds values to enums
- It adds or removes interfaces
-- It changes for which entity types an interface is implemented
+- يغير أنواع الكيانات التي يتم تنفيذ الواجهة لها
For more information, you can check:
@@ -48,7 +48,7 @@ Building Subgraphs is an essential part of The Graph, described more in depth [h
> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit).
-## Subgraph Manifest Definition
+## تعريف Subgraph Manifest
The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use:
@@ -189,7 +189,7 @@ You can see that the `graft-replacement` Subgraph is indexing from older `graph-
Congrats! You have successfully grafted a Subgraph onto another Subgraph.
-## Additional Resources
+## مصادر إضافية
If you want more experience with grafting, here are a few examples for popular contracts:
diff --git a/website/src/pages/ar/subgraphs/guides/near.mdx b/website/src/pages/ar/subgraphs/guides/near.mdx
index e78a69eb7fa2..04daec8b6ac7 100644
--- a/website/src/pages/ar/subgraphs/guides/near.mdx
+++ b/website/src/pages/ar/subgraphs/guides/near.mdx
@@ -1,10 +1,10 @@
---
-title: Building Subgraphs on NEAR
+title: بناء Subgraphs على NEAR
---
This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/).
-## What is NEAR?
+## ما هو NEAR؟
[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information.
@@ -14,14 +14,14 @@ The Graph gives developers tools to process blockchain events and make the resul
Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs:
-- Block handlers: these are run on every new block
-- Receipt handlers: run every time a message is executed at a specified account
+- معالجات الكتل(Block handlers): يتم تشغيلها على كل كتلة جديدة
+- معالجات الاستلام (Receipt handlers): يتم تشغيلها في كل مرة يتم فيها تنفيذ رسالة على حساب محدد
[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt):
-> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point.
+> الاستلام (Receipt) هو الكائن الوحيد القابل للتنفيذ في النظام. عندما نتحدث عن "معالجة معاملة" على منصة NEAR ، فإن هذا يعني في النهاية "تطبيق الاستلامات" في مرحلة ما.
-## Building a NEAR Subgraph
+## بناء NEAR Subgraph
`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs.
@@ -46,7 +46,7 @@ $ graph codegen # generates types from the schema file identified in the manifes
$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
```
-### Subgraph Manifest Definition
+### تعريف Subgraph Manifest
The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph:
@@ -85,12 +85,12 @@ accounts:
- morning.testnet
```
-NEAR data sources support two types of handlers:
+مصادر بيانات NEAR تدعم نوعين من المعالجات:
- `blockHandlers`: run on every new NEAR block. No `source.account` is required.
- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources).
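
For illustration, a minimal data-source fragment combining both handler kinds might look roughly like this (the account and handler names `handleNewBlock` and `handleReceipt` are placeholders, not taken from the example above):

```yaml
dataSources:
  - kind: near
    name: receipts
    network: near-mainnet
    source:
      account: app.good-morning.near # required for receiptHandlers; exact match only
    mapping:
      apiVersion: 0.0.5
      language: wasm/assemblyscript
      blockHandlers:
        - handler: handleNewBlock # runs on every new NEAR block
      receiptHandlers:
        - handler: handleReceipt # runs for each receipt where source.account is the recipient
      file: ./src/mapping.ts
```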
-### Schema Definition
+### تعريف المخطط
Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
@@ -169,7 +169,7 @@ Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/g
This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs.
-## Deploying a NEAR Subgraph
+## نشر NEAR Subgraph
Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released).
@@ -218,19 +218,19 @@ Once your Subgraph has been deployed, it will be indexed by Graph Node. You can
### Indexing NEAR with a Local Graph Node
-Running a Graph Node that indexes NEAR has the following operational requirements:
+لتشغيل Graph Node تقوم بفهرسة NEAR، هناك المتطلبات التشغيلية التالية:
-- NEAR Indexer Framework with Firehose instrumentation
-- NEAR Firehose Component(s)
-- Graph Node with Firehose endpoint configured
+- NEAR Indexer Framework مع أدوات Firehose
+- مكونات NEAR Firehose
+- Graph Node مع تكوين نقطة نهاية Firehose
-We will provide more information on running the above components soon.
+سوف نقدم المزيد من المعلومات حول تشغيل المكونات أعلاه قريبًا.
-## Querying a NEAR Subgraph
+## الاستعلام عن NEAR Subgraph
The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
-## Example Subgraphs
+## أمثلة على الـ Subgraphs
Here are some example Subgraphs for reference:
@@ -250,7 +250,7 @@ No, a Subgraph can only support data sources from one chain/network.
### Can Subgraphs react to more specific triggers?
-Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support.
+حاليًا، يتم دعم مشغلات الكتل (Block) والاستلام (Receipt) فقط. نحن نبحث في مشغلات استدعاءات الدوال لحساب محدد. نحن مهتمون أيضًا بدعم مشغلات الأحداث، بمجرد حصول NEAR على دعم أصلي للأحداث.
### Will receipt handlers trigger for accounts and their sub-accounts?
@@ -264,11 +264,11 @@ accounts:
### Can NEAR Subgraphs make view calls to NEAR accounts during mappings?
-This is not supported. We are evaluating whether this functionality is required for indexing.
+هذا غير مدعوم. نحن بصدد تقييم ما إذا كانت هذه الميزة مطلوبة للفهرسة.
### Can I use data source templates in my NEAR Subgraph?
-This is not currently supported. We are evaluating whether this functionality is required for indexing.
+هذا غير مدعوم حاليا. نحن بصدد تقييم ما إذا كانت هذه الميزة مطلوبة للفهرسة.
### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph?
@@ -278,6 +278,6 @@ Pending functionality is not yet supported for NEAR Subgraphs. In the interim, y
If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com.
-## References
+## المراجع
- [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton)
diff --git a/website/src/pages/ar/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/ar/subgraphs/guides/secure-api-keys-nextjs.mdx
index e17e594408ff..21ac0b74d31d 100644
--- a/website/src/pages/ar/subgraphs/guides/secure-api-keys-nextjs.mdx
+++ b/website/src/pages/ar/subgraphs/guides/secure-api-keys-nextjs.mdx
@@ -2,7 +2,7 @@
title: How to Secure API Keys Using Next.js Server Components
---
-## Overview
+## نظرة عامة
We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key).
diff --git a/website/src/pages/ar/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/ar/subgraphs/guides/subgraph-composition.mdx
new file mode 100644
index 000000000000..080de99b5ba1
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/guides/subgraph-composition.mdx
@@ -0,0 +1,132 @@
+---
+title: Aggregate Data Using Subgraph Composition
+sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs
+---
+
+Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it.
+
+Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation.
+
+## مقدمة
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Speed up your Subgraph's syncing speed
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+
+- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts.
+- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers.
+
+While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
+
+## Prerequisites
+
+### Source Subgraphs
+
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs)
+- See the notes in the [graph-node v0.37.0 release](https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0)
+- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
+- Source Subgraphs cannot use grafting on top of existing entities
+- Aggregated entities can be used in composition, but entities composed from them cannot perform additional aggregations directly
+
+### Composed Subgraphs
+
+- You can only compose up to a **maximum of 5 source Subgraphs**
+- Composed Subgraphs can only use **datasources from the same chain**
+- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time
+- Aggregated entities can be used in composition, but entities composed on top of them cannot apply further aggregations directly
+- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e., you can't use normal event handlers, call handlers, or block handlers in a composed Subgraph)
+
+Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs.
+
+## Get Started
+
+The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph.
+
+### Specifics
+
+- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality.
+- Each source Subgraph is optimized with a specific entity.
+- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance.
+
+### Step 1. Deploy Block Time Source Subgraph
+
+This first source Subgraph calculates the block time for each block.
+
+- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined.
+- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the following commands:
+
+```bash
+npm install
+npm run codegen
+npm run build
+npm run create-local
+npm run deploy-local
+```
+
+### Step 2. Deploy Block Cost Source Subgraph
+
+This second source Subgraph indexes the cost of each block.
+
+#### Key Functions
+
+- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields.
+- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the same commands as above.
+
+### Step 3. Define Block Size in Source Subgraph
+
+This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above.
+
+#### Key Functions
+
+- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size.
+- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly.
+
+### Step 4. Combine Into Block Stats Subgraph
+
+This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above.
+
+> Note:
+>
+> - Any change to a source Subgraph will likely generate a new deployment ID.
+> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes.
+> - All source Subgraphs should be deployed before the composed Subgraph is deployed.
+
+#### Key Functions
+
+- It provides a consolidated data model that encompasses all relevant block metrics.
+- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses.
+
+## Key Takeaways
+
+- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs.
+- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph.
+- This feature unlocks scalability, improving both development and maintenance efficiency.
+
+## مصادر إضافية
+
+- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph).
+- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/).
+- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
diff --git a/website/src/pages/ar/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/ar/subgraphs/guides/subgraph-debug-forking.mdx
index 91aa7484d2ec..364fb8ce4d9c 100644
--- a/website/src/pages/ar/subgraphs/guides/subgraph-debug-forking.mdx
+++ b/website/src/pages/ar/subgraphs/guides/subgraph-debug-forking.mdx
@@ -4,19 +4,19 @@ title: Quick and Easy Subgraph Debugging Using Forks
As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging!
-## Ok, what is it?
+## حسنا، ما هو؟
**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one).
In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_.
-## What?! How?
+## ماذا؟! كيف؟
When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state.
-## Please, show me some code!
+## من فضلك ، أرني بعض الأكواد!
To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.
@@ -46,31 +46,31 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void {
Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error.
-The usual way to attempt a fix is:
+الطريقة المعتادة لمحاولة الإصلاح هي:
-1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't).
+1. إجراء تغيير في مصدر الـ mappings، والذي تعتقد أنه سيحل المشكلة (وأنا أعلم أنه لن يحلها).
2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
-3. Wait for it to sync-up.
-4. If it breaks again go back to 1, otherwise: Hooray!
+3. الانتظار حتى تتم المزامنة.
+4. إذا حدثت المشكلة مرة أخرى، فارجع إلى الخطوة 1، وإلا: رائع!
It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._
Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks:
0. Spin-up a local Graph Node with the **_appropriate fork-base_** set.
-1. Make a change in the mappings source, which you believe will solve the issue.
+1. قم بإجراء تغيير في مصدر الـ mappings، والذي تعتقد أنه سيحل المشكلة.
2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**.
-3. If it breaks again, go back to 1, otherwise: Hooray!
+3. إذا حدثت المشكلة مرة أخرى، فارجع إلى الخطوة 1، وإلا: رائع!
-Now, you may have 2 questions:
+الآن، قد يكون لديك سؤالان:
-1. fork-base what???
-2. Forking who?!
+1. ما هو fork-base؟؟؟
+2. ما الذي نقوم بتفريعه (Forking)؟!
-And I answer:
+وأنا أجيب:
1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store.
-2. Forking is easy, no need to sweat:
+2. التفريع سهل، فلا داعي للقلق:
```bash
$ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020
@@ -78,7 +78,7 @@ $ graph deploy --debug-fork --ipfs http://localhos
Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
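
For example, assuming the failure happened at block 6190343 (a made-up number) and using placeholder values for the contract details, the relevant manifest fields would look roughly like this:

```yaml
dataSources:
  - kind: ethereum/contract
    name: Gravity
    source:
      address: '<contract-address>' # placeholder; use the actual contract address
      abi: Gravity
      startBlock: 6190343 # start indexing from the problematic block
```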
-So, here is what I do:
+لذلك، هذا ما أفعله:
1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
diff --git a/website/src/pages/ar/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/ar/subgraphs/guides/transfer-to-the-graph.mdx
index a62072c48373..4be3dcedffe8 100644
--- a/website/src/pages/ar/subgraphs/guides/transfer-to-the-graph.mdx
+++ b/website/src/pages/ar/subgraphs/guides/transfer-to-the-graph.mdx
@@ -12,9 +12,9 @@ Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized n
## Upgrade Your Subgraph to The Graph in 3 Easy Steps
-1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment)
-2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio)
-3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network)
+1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment)
+2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio)
+3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network)
## 1. Set Up Your Studio Environment
@@ -98,7 +98,7 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the
Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
-### Additional Resources
+### مصادر إضافية
- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
- To explore all the ways you can optimize and customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/ar/subgraphs/querying/best-practices.mdx b/website/src/pages/ar/subgraphs/querying/best-practices.mdx
index 23dcd2cb8920..f469ff02de9c 100644
--- a/website/src/pages/ar/subgraphs/querying/best-practices.mdx
+++ b/website/src/pages/ar/subgraphs/querying/best-practices.mdx
@@ -4,7 +4,7 @@ title: أفضل الممارسات للاستعلام
The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language.
-Learn the essential GraphQL language rules and best practices to optimize your subgraph.
+Learn the essential GraphQL language rules and best practices to optimize your Subgraph.
---
@@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi
However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features:
-- التعامل مع ال subgraph عبر السلاسل: الاستعلام من عدة subgraphs عبر استعلام واحد
+- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query
- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- نتيجة مكتوبة بالكامل
@@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set `
### Use a single query to request multiple records
-By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
+By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
Example of inefficient querying:
diff --git a/website/src/pages/ar/subgraphs/querying/from-an-application.mdx b/website/src/pages/ar/subgraphs/querying/from-an-application.mdx
index 767a2caa9021..08c71fa4ad1f 100644
--- a/website/src/pages/ar/subgraphs/querying/from-an-application.mdx
+++ b/website/src/pages/ar/subgraphs/querying/from-an-application.mdx
@@ -1,5 +1,6 @@
---
title: الاستعلام من التطبيق
+sidebarTitle: Querying from an App
---
Learn how to query The Graph from your application.
@@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d
### Subgraph Studio Endpoint
-After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:
+After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:
```
https://api.studio.thegraph.com/query///
@@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query///
### The Graph Network Endpoint
-After publishing your subgraph to the network, you will receive an endpoint that looks like this: :
+After publishing your Subgraph to the network, you will receive an endpoint that looks like this:
```
https://gateway.thegraph.com/api//subgraphs/id/
```
-> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data.
+> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data.
## Using Popular GraphQL Clients
@@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/
The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as:
-- التعامل مع ال subgraph عبر السلاسل: الاستعلام من عدة subgraphs عبر استعلام واحد
+- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query
- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- نتيجة مكتوبة بالكامل
@@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq
### Fetch Data with Graph Client
-Let's look at how to fetch data from a subgraph with `graph-client`:
+Let's look at how to fetch data from a Subgraph with `graph-client`:
#### Step 1
@@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on
### Fetch Data with Apollo Client
-Let's look at how to fetch data from a subgraph with Apollo client:
+Let's look at how to fetch data from a Subgraph with Apollo client:
#### Step 1
@@ -257,7 +258,7 @@ client
### Fetch data with URQL
-Let's look at how to fetch data from a subgraph with URQL:
+Let's look at how to fetch data from a Subgraph with URQL:
#### Step 1
diff --git a/website/src/pages/ar/subgraphs/querying/graph-client/README.md b/website/src/pages/ar/subgraphs/querying/graph-client/README.md
index 416cadc13c6f..d4850e723c6e 100644
--- a/website/src/pages/ar/subgraphs/querying/graph-client/README.md
+++ b/website/src/pages/ar/subgraphs/querying/graph-client/README.md
@@ -16,19 +16,19 @@ This library is intended to simplify the network aspect of data consumption for
| Status | Feature | Notes |
| :----: | ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- |
-| ✅ | Multiple indexers | based on fetch strategies |
-| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue |
-| ✅ | Build time validations & optimizations | |
-| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) |
-| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source |
-| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client |
-| ✅ | Local (client-side) Mutations | |
-| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) |
-| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit |
-| ✅ | Integration with `@apollo/client` | |
-| ✅ | Integration with `urql` | |
-| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` |
-| ✅ | [`@live` queries](./live.md) | Based on polling |
+| ✅ | Multiple indexers | based on fetch strategies |
+| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue |
+| ✅ | Build time validations & optimizations | |
+| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) |
+| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source |
+| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client |
+| ✅ | Local (client-side) Mutations | |
+| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) |
+| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit |
+| ✅ | Integration with `@apollo/client` | |
+| ✅ | Integration with `urql` | |
+| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` |
+| ✅ | [`@live` queries](./live.md) | Based on polling |
> You can find an [extended architecture design here](./architecture.md)
@@ -308,8 +308,8 @@ sources:
`highestValue`
-
- This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated.
+
+This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated.
This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources.
diff --git a/website/src/pages/ar/subgraphs/querying/graphql-api.mdx b/website/src/pages/ar/subgraphs/querying/graphql-api.mdx
index d73381f88a7d..14e11ff80306 100644
--- a/website/src/pages/ar/subgraphs/querying/graphql-api.mdx
+++ b/website/src/pages/ar/subgraphs/querying/graphql-api.mdx
@@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph.
## What is GraphQL?
-[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs.
-To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/).
+To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/).
## Queries with GraphQL
-In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type.
+In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type.
> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph.
@@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison:
You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`.
-This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block).
+This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block).
```graphql
{
@@ -329,7 +329,7 @@ This query will return `Challenge` entities, and their associated `Application`
### Fulltext Search Queries
-Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph.
+Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph.
Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field.
@@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021
The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
+GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.
@@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en
### Subgraph Metadata
-All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows:
+All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows:
```graphQL
{
@@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s
}
```
-If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the subgraph's start block, and less than or equal to the most recently indexed block.
+If a block is provided, the metadata is as of that block; if not, the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block.
`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file.
@@ -427,6 +427,6 @@ If a block is provided, the metadata is as of that block, if not the latest inde
- hash: the hash of the block
- number: the block number
-- timestamp: the timestamp of the block, if available (this is currently only available for subgraphs indexing EVM networks)
+- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks)
-`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block
+`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block
diff --git a/website/src/pages/ar/subgraphs/querying/introduction.mdx b/website/src/pages/ar/subgraphs/querying/introduction.mdx
index 281957e11e14..bdd0bde88865 100644
--- a/website/src/pages/ar/subgraphs/querying/introduction.mdx
+++ b/website/src/pages/ar/subgraphs/querying/introduction.mdx
@@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex
## نظره عامة
-When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph.
+When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph.
## Specifics
-Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner.
+Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner.

@@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an
Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/).
-> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities.
+> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities.
>
> Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead.
diff --git a/website/src/pages/ar/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/ar/subgraphs/querying/managing-api-keys.mdx
index 33e9d7b78fc2..7b91a147ef47 100644
--- a/website/src/pages/ar/subgraphs/querying/managing-api-keys.mdx
+++ b/website/src/pages/ar/subgraphs/querying/managing-api-keys.mdx
@@ -4,11 +4,11 @@ title: Managing API keys
## نظره عامة
-API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application.
+API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application.
### Create and Manage API Keys
-Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs.
+Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs.
The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
@@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page:
- كمية GRT التي تم صرفها
2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can:
- عرض وإدارة أسماء النطاقات المصرح لها باستخدام مفتاح API الخاص بك
- - تعيين الـ subgraphs التي يمكن الاستعلام عنها باستخدام مفتاح API الخاص بك
+ - Assign Subgraphs that can be queried with your API key
diff --git a/website/src/pages/ar/subgraphs/querying/python.mdx b/website/src/pages/ar/subgraphs/querying/python.mdx
index 0937e4f7862d..ed0d078a4175 100644
--- a/website/src/pages/ar/subgraphs/querying/python.mdx
+++ b/website/src/pages/ar/subgraphs/querying/python.mdx
@@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds
sidebarTitle: Python (Subgrounds)
---
-Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis!
+Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis!
Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations.
@@ -17,14 +17,14 @@ pip install --upgrade subgrounds
python -m pip install --upgrade subgrounds
```
-Once installed, you can test out subgrounds with the following query. The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame).
+Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame).
```python
from subgrounds import Subgrounds
sg = Subgrounds()
-# Load the subgraph
+# Load the Subgraph
aave_v2 = sg.load_subgraph(
"https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum")
diff --git a/website/src/pages/ar/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/ar/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
index 103e470e14da..17258dd13ea1 100644
--- a/website/src/pages/ar/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
+++ b/website/src/pages/ar/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
@@ -2,17 +2,17 @@
title: Subgraph ID vs Deployment ID
---
-A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID.
+A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID.
-When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph.
+When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph.
Here are some key differences between the two IDs: 
## Deployment ID
-The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
+The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
-When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published.
+When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup, as there is full control over the Subgraph version being queried. However, it requires manually updating the query code every time a new version of the Subgraph is published.
Example endpoint that uses Deployment ID:
@@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID:
## Subgraph ID
-The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats.
+The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats.
-Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
+Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`
diff --git a/website/src/pages/ar/subgraphs/quick-start.mdx b/website/src/pages/ar/subgraphs/quick-start.mdx
index 42f4acf08df9..9b7bf860e87d 100644
--- a/website/src/pages/ar/subgraphs/quick-start.mdx
+++ b/website/src/pages/ar/subgraphs/quick-start.mdx
@@ -2,7 +2,7 @@
title: بداية سريعة
---
-Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
+Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
## Prerequisites
@@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/
## How to Build a Subgraph
-### 1. Create a subgraph in Subgraph Studio
+### 1. Create a Subgraph in Subgraph Studio
Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet.
-Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys.
+Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys.
-Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name".
+Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name".
### 2. Install the Graph CLI
@@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/):
yarn global add @graphprotocol/graph-cli
```
-### 3. Initialize your subgraph
+### 3. Initialize your Subgraph
-> يمكنك العثور على الأوامر المتعلقة بالغراف الفرعي الخاص بك على صفحة الغراف الفرعي في (سبغراف استوديو) (https://thegraph.com/studio).
+> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/).
-The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events.
+The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events.
-The following command initializes your subgraph from an existing contract:
+The following command initializes your Subgraph from an existing contract:
```sh
graph init
@@ -51,42 +51,42 @@ graph init
If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI.
-When you initialize your subgraph, the CLI will ask you for the following information:
+When you initialize your Subgraph, the CLI will ask you for the following information:
-- **Protocol**: Choose the protocol your subgraph will be indexing data from.
-- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph.
-- **Directory**: Choose a directory to create your subgraph in.
-- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from.
+- **Protocol**: Choose the protocol your Subgraph will be indexing data from.
+- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph.
+- **Directory**: Choose a directory to create your Subgraph in.
+- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from.
- **Contract address**: Locate the smart contract address you’d like to query data from.
- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file.
-- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed.
+- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed.
- **Contract Name**: Input the name of your contract.
-- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event.
+- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event.
- **Add another contract** (optional): You can add another contract.
-يرجى مراجعة الصورة المرفقة كمثال عن ما يمكن توقعه عند تهيئة غرافك الفرعي:
+See the following screenshot for an example of what to expect when initializing your Subgraph:

-### 4. Edit your subgraph
+### 4. Edit your Subgraph
-The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph.
+The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph.
-When making changes to the subgraph, you will mainly work with three files:
+When making changes to the Subgraph, you will mainly work with three files:
-- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index.
-- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph.
+- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index.
+- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph.
- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema.
-For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/).
+For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/).
-### 5. Deploy your subgraph
+### 5. Deploy your Subgraph
> Remember, deploying is not the same as publishing.
-When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
+When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
-عند كتابة غرافك الفرعي، قم بتنفيذ الأوامر التالية:
+Once your Subgraph is written, run the following commands:
````
```sh
@@ -94,7 +94,7 @@ graph codegen && graph build
```
````
-Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio.
+Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio.

@@ -109,37 +109,37 @@ graph deploy
The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`.
-### 6. Review your subgraph
+### 6. Review your Subgraph
-If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:
+If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:
- Run a sample query.
-- Analyze your subgraph in the dashboard to check information.
-- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this:
+- Analyze your Subgraph in the dashboard to check information.
+- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this:

-### 7. Publish your subgraph to The Graph Network
+### 7. Publish your Subgraph to The Graph Network
-When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following:
+When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following:
-- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
-- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
-- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it.
+- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
+- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
+- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it.
-> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph.
+> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph.
#### Publishing with Subgraph Studio
-To publish your subgraph, click the Publish button in the dashboard.
+To publish your Subgraph, click the Publish button in the dashboard.
-
+
-Select the network to which you would like to publish your subgraph.
+Select the network to which you would like to publish your Subgraph.
#### Publishing from the CLI
-As of version 0.73.0, you can also publish your subgraph with the Graph CLI.
+As of version 0.73.0, you can also publish your Subgraph with the Graph CLI.
Open the `graph-cli`.
@@ -157,32 +157,32 @@ graph publish
```
````
-3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice.
+3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice.

To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
-#### Adding signal to your subgraph
+#### Adding signal to your Subgraph
-1. To attract Indexers to query your subgraph, you should add GRT curation signal to it.
+1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it.
- - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph.
+ - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph.
2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount.
- - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks.
+ - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks.
To learn more about curation, read [Curating](/resources/roles/curating/).
-To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option:
+To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option:

-### 8. Query your subgraph
+### 8. Query your Subgraph
-You now have access to 100,000 free queries per month with your subgraph on The Graph Network!
+You now have access to 100,000 free queries per month with your Subgraph on The Graph Network!
-You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button.
+You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button.
-For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/).
+For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/).
diff --git a/website/src/pages/ar/substreams/developing/dev-container.mdx b/website/src/pages/ar/substreams/developing/dev-container.mdx
index bd4acf16eec7..339ddb159c87 100644
--- a/website/src/pages/ar/substreams/developing/dev-container.mdx
+++ b/website/src/pages/ar/substreams/developing/dev-container.mdx
@@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container.
It's a tool to help you build your first project. You can either run it remotely through GitHub Codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file).
-Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling.
+Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling.
## Prerequisites
@@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea
You can configure your project to query data either through a Subgraph or directly from an SQL database:
-- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph).
+- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph).
- **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink).
## Deployment Options
diff --git a/website/src/pages/ar/substreams/developing/sinks.mdx b/website/src/pages/ar/substreams/developing/sinks.mdx
index 8a3a2eda4ff0..34d2f8624e7d 100644
--- a/website/src/pages/ar/substreams/developing/sinks.mdx
+++ b/website/src/pages/ar/substreams/developing/sinks.mdx
@@ -1,5 +1,5 @@
---
-title: Official Sinks
+title: Sink your Substreams
---
Choose a sink that meets your project's needs.
@@ -8,7 +8,7 @@ Choose a sink that meets your project's needs.
Once you find a package that fits your needs, you can choose how you want to consume the data.
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph.
+Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph.
## Sinks
diff --git a/website/src/pages/ar/substreams/developing/solana/account-changes.mdx b/website/src/pages/ar/substreams/developing/solana/account-changes.mdx
index 3e13301b042c..704443dee771 100644
--- a/website/src/pages/ar/substreams/developing/solana/account-changes.mdx
+++ b/website/src/pages/ar/substreams/developing/solana/account-changes.mdx
@@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu
> NOTE: History for Solana Account Changes starts in 2025, at block 310629601.
-For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes).
+For each Substreams Solana account block, only the latest update per account is recorded; see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance are omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (e.g. lamport changes).
> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and run `substreams run solana-common blocks_without_votes -s -1 -o clock`.
diff --git a/website/src/pages/ar/substreams/developing/solana/transactions.mdx b/website/src/pages/ar/substreams/developing/solana/transactions.mdx
index b1b97cdcbfe5..ebdeeb98a931 100644
--- a/website/src/pages/ar/substreams/developing/solana/transactions.mdx
+++ b/website/src/pages/ar/substreams/developing/solana/transactions.mdx
@@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi
## Step 3: Load the Data
-To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink.
+To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink.
### Subgraph
1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions.
-2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`.
+2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`.
3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`.
### SQL
diff --git a/website/src/pages/ar/substreams/introduction.mdx b/website/src/pages/ar/substreams/introduction.mdx
index 774c2dfb90c2..ffb3f46baa62 100644
--- a/website/src/pages/ar/substreams/introduction.mdx
+++ b/website/src/pages/ar/substreams/introduction.mdx
@@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh
## Substreams Benefits
-- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing.
+- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing.
- **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara.
- **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections.
- **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database.
diff --git a/website/src/pages/ar/substreams/publishing.mdx b/website/src/pages/ar/substreams/publishing.mdx
index 0d3b7933820e..8ee05b0eda53 100644
--- a/website/src/pages/ar/substreams/publishing.mdx
+++ b/website/src/pages/ar/substreams/publishing.mdx
@@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s
### What is a package?
-A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs.
+A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs.
## Publish a Package
@@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data

-That's it! You have succesfully published a package in the Substreams registry.
+That's it! You have successfully published a package in the Substreams registry.

diff --git a/website/src/pages/ar/supported-networks.mdx b/website/src/pages/ar/supported-networks.mdx
index 09e56bdeb0c2..ac7050638264 100644
--- a/website/src/pages/ar/supported-networks.mdx
+++ b/website/src/pages/ar/supported-networks.mdx
@@ -18,11 +18,11 @@ export const getStaticProps = getSupportedNetworksStaticProps
- Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints.
- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier.
-- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks.
+- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks.
- For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
## Running Graph Node locally
If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration.
-Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support.
+Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support.
diff --git a/website/src/pages/ar/token-api/_meta-titles.json b/website/src/pages/ar/token-api/_meta-titles.json
index 692cec84bd58..7ed31e0af95d 100644
--- a/website/src/pages/ar/token-api/_meta-titles.json
+++ b/website/src/pages/ar/token-api/_meta-titles.json
@@ -1,5 +1,6 @@
{
"mcp": "MCP",
"evm": "EVM Endpoints",
- "monitoring": "Monitoring Endpoints"
+ "monitoring": "Monitoring Endpoints",
+ "faq": "FAQ"
}
diff --git a/website/src/pages/ar/token-api/_meta.js b/website/src/pages/ar/token-api/_meta.js
index 09aa7ffc2649..0e526f673a66 100644
--- a/website/src/pages/ar/token-api/_meta.js
+++ b/website/src/pages/ar/token-api/_meta.js
@@ -5,4 +5,5 @@ export default {
mcp: titles.mcp,
evm: titles.evm,
monitoring: titles.monitoring,
+ faq: '',
}
diff --git a/website/src/pages/ar/token-api/faq.mdx b/website/src/pages/ar/token-api/faq.mdx
new file mode 100644
index 000000000000..8c1032894ddb
--- /dev/null
+++ b/website/src/pages/ar/token-api/faq.mdx
@@ -0,0 +1,109 @@
+---
+title: Token API FAQ
+---
+
+Get fast answers to easily integrate and scale with The Graph's high-performance Token API.
+
+## عام
+
+### What blockchains does the Token API support?
+
+Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One.
+
+### Why isn't my API key from The Graph Market working?
+
+Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key.
+
+### How current is the data provided by the API relative to the blockchain?
+
+The API provides data up to the latest finalized block.
+
+### How do I authenticate requests to the Token API?
+
+Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
+
+### Does the Token API provide a client SDK?
+
+While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol).
+
+### Are there plans to support additional blockchains in the future?
+
+Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol).
+
+### Are there plans to offer data closer to the chain head?
+
+Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol).
+
+### Are there plans to support additional use cases such as NFTs?
+
+The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
+
+## MCP / LLM / AI Topics
+
+### Is there a time limit for LLM queries?
+
+Yes. The maximum time limit for LLM queries is 10 seconds.
+
+### Is there a known list of LLMs that work with the API?
+
+Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server.
+
+Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter).
+
+### Where can I find the MCP client?
+
+You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client).
+
+## Advanced Topics
+
+### I'm getting 403/401 errors. What's wrong?
+
+Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market.
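To make the header requirements concrete, here is a minimal Node.js (18+) sketch of a correctly authenticated request. The token value and endpoint path are illustrative placeholders, not confirmed by this FAQ:

```javascript
// Hypothetical sketch: building headers for a Token API request.
// ACCESS_TOKEN and the endpoint path below are placeholders.
const ACCESS_TOKEN = process.env.ACCESS_TOKEN ?? "your-jwt-from-the-graph-market";

const headers = {
  // The "Bearer " prefix is required — omitting it is a common cause of 401/403 errors.
  Authorization: `Bearer ${ACCESS_TOKEN}`,
  Accept: "application/json",
};

// Not executed here; shown only to illustrate the request shape:
// const res = await fetch("https://token-api.thegraph.com/balances/evm/0x...", { headers });
// const body = await res.json();
```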
+
+### Are there rate limits or usage costs?
+
+During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+
+### What networks are supported, and how do I specify them?
+
+You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
+
+### Why do I only see 10 results? How can I get more data?
+
+Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
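As an illustration, the pagination parameters (together with the optional `network_id` filter) can be assembled into a query string like this — the base URL, path, and helper name are assumptions for the sketch:

```javascript
// Sketch: building a paginated Token API URL. `limit` caps at 500 and
// `page` is 1-indexed; the base path here is illustrative.
function pagedUrl(base, { limit = 10, page = 1, network_id } = {}) {
  const params = new URLSearchParams({ limit: String(limit), page: String(page) });
  if (network_id) params.set("network_id", network_id);
  return `${base}?${params}`;
}

// Items 51–100 on Ethereum mainnet:
const url = pagedUrl("https://token-api.thegraph.com/transfers/evm/0xYourAddress", {
  limit: 50,
  page: 2,
  network_id: "mainnet",
});
```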
+
+### How do I fetch older transfer history?
+
+The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call.
+
+### What does an empty `"data": []` array mean?
+
+An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error.
+
+### Why is the JSON response wrapped in a `"data"` array?
+
+All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`).
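A short sketch of parsing under that contract — the sample payload shape is hypothetical, shown only to illustrate indexing into `data`:

```javascript
// Sketch: every Token API response wraps results in a top-level `data` array,
// even when there is a single item. This sample payload is made up.
const response = {
  data: [
    { contract: "0x0000000000000000000000000000000000000000", amount: "1000000", decimals: 6 },
  ],
};

const results = response.data; // always index into `data`
const first = results.length > 0 ? results[0] : null; // [] means "no records", not an error
```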
+
+### Why are token amounts returned as strings?
+
+Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values.
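For example, a string amount can be converted with `BigInt` rather than `Number` — the field names mirror the description above, but the payload itself is hypothetical:

```javascript
// Sketch: deriving a human-readable value from a string amount using BigInt,
// avoiding Number's 2^53 - 1 precision ceiling.
const balance = { amount: "123456789012345678901", decimals: 18 };

const raw = BigInt(balance.amount);
const divisor = 10n ** BigInt(balance.decimals);
const whole = raw / divisor; // integer token units
const frac = (raw % divisor).toString().padStart(balance.decimals, "0");

const display = `${whole}.${frac}`; // "123.456789012345678901"
```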
+
+### What format should addresses be in?
+
+The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address.
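A sketch of client-side normalization matching those rules — the helper name is my own, not part of the API:

```javascript
// Sketch: validate/normalize an EVM address (40 hex chars, optional 0x prefix,
// case-insensitive) before sending it to the API.
function normalizeAddress(addr) {
  const hex = addr.startsWith("0x") ? addr.slice(2) : addr;
  if (!/^[0-9a-fA-F]{40}$/.test(hex)) {
    throw new Error(`invalid address: ${addr}`);
  }
  return `0x${hex.toLowerCase()}`;
}
```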
+
+### Do I need special headers besides authentication?
+
+While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`).
+
+### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this?
+
+For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`.
+
+### Is the Token API part of The Graph's GraphQL service?
+
+No, the Token API is a separate RESTful service. Unlike traditional Subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy Subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints.
+
+### Do I need to use MCP or tools like Claude, Cline, or Cursor?
+
+No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required.
diff --git a/website/src/pages/ar/token-api/mcp/claude.mdx b/website/src/pages/ar/token-api/mcp/claude.mdx
index 0da8f2be031d..12a036b6fc24 100644
--- a/website/src/pages/ar/token-api/mcp/claude.mdx
+++ b/website/src/pages/ar/token-api/mcp/claude.mdx
@@ -25,11 +25,11 @@ Create or edit your `claude_desktop_config.json` file.
```json label="claude_desktop_config.json"
{
"mcpServers": {
- "mcp-pinax": {
+ "token-api": {
"command": "npx",
"args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
"env": {
- "ACCESS_TOKEN": ""
+ "ACCESS_TOKEN": ""
}
}
}
diff --git a/website/src/pages/ar/token-api/mcp/cline.mdx b/website/src/pages/ar/token-api/mcp/cline.mdx
index ab54c0c8f6f0..ef98e45939fe 100644
--- a/website/src/pages/ar/token-api/mcp/cline.mdx
+++ b/website/src/pages/ar/token-api/mcp/cline.mdx
@@ -10,7 +10,7 @@ sidebarTitle: Cline
- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path.
- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version.
-
+
## Configuration
diff --git a/website/src/pages/ar/token-api/quick-start.mdx b/website/src/pages/ar/token-api/quick-start.mdx
index 4653c3d41ac6..c5fa07fa9371 100644
--- a/website/src/pages/ar/token-api/quick-start.mdx
+++ b/website/src/pages/ar/token-api/quick-start.mdx
@@ -1,6 +1,6 @@
---
title: Token API Quick Start
-sidebarTitle: Quick Start
+sidebarTitle: بداية سريعة
---

diff --git a/website/src/pages/cs/about.mdx b/website/src/pages/cs/about.mdx
index 256519660a73..1f43c663437f 100644
--- a/website/src/pages/cs/about.mdx
+++ b/website/src/pages/cs/about.mdx
@@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block
## The Graph Provides a Solution
-The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API.
+The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API.
Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process.
### How The Graph Functions
-Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL.
+Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL.
#### Specifics
-- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph.
+- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph.
-- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database.
+- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database.
-- When creating a subgraph, you need to write a subgraph manifest.
+- When creating a Subgraph, you need to write a Subgraph manifest.
-- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph.
+- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph.
-The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions.
+The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions.

@@ -56,12 +56,12 @@ Průběh se řídí těmito kroky:
1. Dapp přidává data do Ethereum prostřednictvím transakce na chytrém kontraktu.
2. Chytrý smlouva vysílá při zpracování transakce jednu nebo více událostí.
-3. Uzel grafu neustále vyhledává nové bloky Ethereum a data pro váš podgraf, která mohou obsahovat.
-4. Uzel grafu v těchto blocích vyhledá události Etherea pro váš podgraf a spustí vámi zadané mapovací obsluhy. Mapování je modul WASM, který vytváří nebo aktualizuje datové entity, které Uzel grafu ukládá v reakci na události Ethereum.
+3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain.
+4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events.
5. Aplikace dapp se dotazuje grafického uzlu na data indexovaná z blockchainu pomocí [GraphQL endpoint](https://graphql.org/learn/). Uzel Grafu zase překládá dotazy GraphQL na dotazy pro své podkladové datové úložiště, aby tato data načetl, přičemž využívá indexovací schopnosti úložiště. Dapp tato data zobrazuje v bohatém UI pro koncové uživatele, kteří je používají k vydávání nových transakcí na platformě Ethereum. Cyklus se opakuje.
## Další kroky
-The following sections provide a more in-depth look at subgraphs, their deployment and data querying.
+The following sections provide a more in-depth look at Subgraphs, their deployment and data querying.
-Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data.
+Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data.
diff --git a/website/src/pages/cs/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/cs/archived/arbitrum/arbitrum-faq.mdx
index 050d1a0641aa..df47adfff704 100644
--- a/website/src/pages/cs/archived/arbitrum/arbitrum-faq.mdx
+++ b/website/src/pages/cs/archived/arbitrum/arbitrum-faq.mdx
@@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from:
- Zabezpečení zděděné po Ethereum
-Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas.
+Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas.
Komunita Graf se v loňském roce rozhodla pokračovat v Arbitrum po výsledku diskuze [GIP-0031] (https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305).
@@ -39,7 +39,7 @@ Pro využití výhod používání a Graf na L2 použijte rozevírací přepína

-## Jako vývojář podgrafů, Spotřebitel dat, indexer, kurátor, nebo delegátor, co mám nyní udělat?
+## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now?
Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support.
@@ -51,9 +51,9 @@ Všechny chytré smlouvy byly důkladně [auditovány](https://github.com/graphp
Vše bylo důkladně otestováno, a je připraven pohotovostní plán, který zajistí bezpečný a bezproblémový přechod. Podrobnosti naleznete [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20).
-## Are existing subgraphs on Ethereum working?
+## Are existing Subgraphs on Ethereum working?
-All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly.
+All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly.
## Does GRT have a new smart contract deployed on Arbitrum?
diff --git a/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-faq.mdx
index 88e1d9e632a2..439e83f3864b 100644
--- a/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-faq.mdx
+++ b/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-faq.mdx
@@ -24,9 +24,9 @@ Výjimkou jsou peněženky s chytrými smlouvami, jako je multisigs: jedná se o
Nástroje pro přenos L2 používají k odesílání zpráv z L1 do L2 nativní mechanismus Arbitrum. Tento mechanismus se nazývá 'retryable ticket,' a všechny nativní tokenové můstky, včetně můstku Arbitrum GRT, ho používají. Další informace o opakovatelných ticketch naleznete v části [Arbitrum docs](https://docs.arbitrum.io/arbos/l1-to-l2-messaging).
-Při přenosu aktiv (podgraf, podíl, delegace nebo kurátorství) do L2 se odešle zpráva přes můstek Arbitrum GRT, která vytvoří opakovatelný tiket v L2. Nástroj pro převod zahrnuje v transakci určitou hodnotu ETH, která se použije na 1) zaplacení vytvoření tiketu a 2) zaplacení plynu pro provedení tiketu v L2. Se však ceny plynu mohou v době, než je ticket připraven k provedení v režimu L2, měnit. Je možné, že se tento pokus o automatické provedení nezdaří. Když se tak stane, most Arbitrum udrží opakovatelný tiket naživu až 7 dní a kdokoli se může pokusit o jeho "vykoupení" (což vyžaduje peněženku s určitým množstvím ETH propojenou s mostem Arbitrum).
+When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum).
-Tomuto kroku říkáme 'Potvrzení' ve všech nástrojích pro přenos - ve většině případů se spustí automaticky, protože automatické provedení je většinou úspěšné, ale je důležité, abyste se ujistili, že proběhlo. Pokud se to nepodaří a během 7 dnů nedojde k žádnému úspěšnému opakování, můstek Arbitrum tiket zahodí a vaše aktiva (podgraf, podíl, delegace nebo kurátorství) budou ztracena a nebude možné je obnovit. Vývojáři The Graph jádra mají k dispozici monitorovací systém, který tyto situace odhaluje a snaží se lístky uplatnit dříve, než bude pozdě, ale v konečném důsledku je vaší odpovědností zajistit, aby byl váš přenos dokončen včas. Pokud máte potíže s potvrzením transakce, obraťte se na nás pomocí [tohoto formuláře](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) a hlavní vývojáři vám pomohou.
+This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you.
### Zahájil jsem přenos delegace/podílů/kurátorství a nejsem si jistý, zda se to dostalo do L2. Jak mohu potvrdit, že to bylo přeneseno správně?
@@ -36,43 +36,43 @@ Pokud máte k dispozici hash transakce L1 (který zjistíte, když se podíváte
## Podgraf přenos
-### Jak mohu přenést svůj podgraf?
+### How do I transfer my Subgraph?
-Chcete-li přenést svůj podgraf, musíte provést následující kroky:
+To transfer your Subgraph, you will need to complete the following steps:
1. Zahájení převodu v mainnet Ethereum
2. Počkejte 20 minut na potvrzení
-3. Potvrzení přenosu podgrafů na Arbitrum\*
+3. Confirm Subgraph transfer on Arbitrum\*
-4. Úplné zveřejnění podgrafu na arbitrum
+4. Finish publishing Subgraph on Arbitrum
5. Aktualizovat adresu URL dotazu (doporučeno)
-\*Upozorňujeme, že převod musíte potvrdit do 7 dnů, jinak může dojít ke ztrátě vašeho podgrafu. Ve většině případů se tento krok provede automaticky, ale v případě prudkého nárůstu cen plynu na Arbitru může být nutné ruční potvrzení. Pokud se během tohoto procesu vyskytnou nějaké problémy, budou k dispozici zdroje, které vám pomohou: kontaktujte podporu na adrese support@thegraph.com nebo na [Discord](https://discord.gg/graphprotocol).
+\*Note that you must confirm the transfer within 7 days otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
### Odkud mám iniciovat převod?
-Přenos můžete zahájit v [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) nebo na libovolné stránce s detaily subgrafu. "Kliknutím na tlačítko 'Transfer Subgraph' na stránce s podrobnostmi o podgrafu zahájíte přenos.
+You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer.
-### Jak dlouho musím čekat, než bude můj podgraf přenesen
+### How long do I need to wait until my Subgraph is transferred
Přenos trvá přibližně 20 minut. Most Arbitrum pracuje na pozadí a automaticky dokončí přenos mostu. V některých případech může dojít ke zvýšení nákladů na plyn a transakci bude nutné potvrdit znovu.
-### Bude můj podgraf zjistitelný i poté, co jej přenesu do L2?
+### Will my Subgraph still be discoverable after I transfer it to L2?
-Váš podgraf bude zjistitelný pouze v síti, ve které je publikován. Pokud se například váš subgraf nachází na Arbitrum One, pak jej najdete pouze v Průzkumníku na Arbitrum One a na Ethereum jej nenajdete. Ujistěte se, že máte v přepínači sítí v horní části stránky vybranou možnost Arbitrum One, abyste se ujistili, že jste ve správné síti. Po přenosu se podgraf L1 zobrazí jako zastaralý.
+Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network. After the transfer, the L1 Subgraph will appear as deprecated.
-### Musí být můj podgraf zveřejněn, abych ho mohl přenést?
+### Does my Subgraph need to be published to transfer it?
-Abyste mohli využít nástroj pro přenos subgrafů, musí být váš subgraf již zveřejněn v mainnet Ethereum a musí mít nějaký kurátorský signál vlastněný peněženkou, která subgraf vlastní. Pokud váš subgraf není zveřejněn, doporučujeme vám jednoduše publikovat přímo na Arbitrum One - související poplatky za plyn budou podstatně nižší. Pokud chcete přenést publikovaný podgraf, ale účet vlastníka na něm nemá kurátorský signál, můžete z tohoto účtu signalizovat malou částku (např. 1 GRT); nezapomeňte zvolit "auto-migrating" signál.
+To take advantage of the Subgraph transfer tool, your Subgraph must be already published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal.
-### Co se stane s verzí mého subgrafu na ethereum mainnet po převodu na Arbitrum?
+### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum?
-Po převedení vašeho subgrafu na Arbitrum bude verze mainnet Ethereum zastaralá. Doporučujeme vám aktualizovat adresu URL dotazu do 48 hodin. Je však zavedena ochranná lhůta, která udržuje adresu URL mainnet funkční, aby bylo možné aktualizovat podporu dapp třetích stran.
+After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated.
### Musím po převodu také znovu publikovat na Arbitrum?
@@ -80,21 +80,21 @@ Po uplynutí 20minutového okna pro převod budete muset převod potvrdit transa
### Dojde při opětovném publikování k výpadku mého koncového bodu?
-Je nepravděpodobné, ale je možné, že dojde ke krátkému výpadku v závislosti na tom, které indexátory podporují podgraf na L1 a zda jej indexují, dokud není podgraf plně podporován na L2.
+It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2.
### Je publikování a verzování na L2 stejné jako na mainnet Ethereum Ethereum?
-Ano. Při publikování v aplikaci Subgraph Studio vyberte jako publikovanou síť Arbitrum One. Ve Studiu bude k dispozici nejnovější koncový bod, který odkazuje na nejnovější aktualizovanou verzi podgrafu.
+Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph.
-### Bude se kurátorství mého podgrafu pohybovat spolu s mým podgrafem?
+### Will my Subgraph's curation move with my Subgraph?
-Pokud jste zvolili automatickou migraci signálu, 100 % vaší vlastní kurátorství se přesune spolu s vaším subgrafem do Arbitrum One. Veškerý signál kurátorství podgrafu bude v okamžiku převodu převeden na GRT a GRT odpovídající vašemu signálu kurátorství bude použit k ražbě signálu na podgrafu L2.
+If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph.
-Ostatní kurátoři se mohou rozhodnout, zda stáhnou svou část GRT, nebo ji také převedou na L2, aby vyrazili signál na stejném podgraf.
+Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph.
-### Mohu svůj subgraf po převodu přesunout zpět do mainnet Ethereum?
+### Can I move my Subgraph back to Ethereum mainnet after I transfer?
-Po přenosu bude vaše verze tohoto podgrafu v síti Ethereum mainnet zneplatněna. Pokud se chcete přesunout zpět do mainnetu, musíte provést nové nasazení a publikovat zpět do mainnet. Převod zpět do mainnetu Etherea se však důrazně nedoporučuje, protože odměny za indexování budou nakonec distribuovány výhradně na Arbitrum One.
+Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One.
### Proč potřebuji k dokončení převodu překlenovací ETH?
@@ -206,19 +206,19 @@ Chcete-li přenést své kurátorství, musíte provést následující kroky:
\*Pokud je to nutné - tj. používáte smluvní adresu.
-### Jak se dozvím, že se mnou kurátorovaný podgraf přesunul do L2?
+### How will I know if the Subgraph I curated has moved to L2?
-Při zobrazení stránky s podrobnostmi podgrafu se zobrazí banner s upozorněním, že tento podgraf byl přenesen. Můžete následovat výzvu k přenosu kurátorství. Tyto informace najdete také na stránce s podrobnostmi o podgrafu, který se přesunul.
+When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved.
### Co když si nepřeji přesunout své kurátorství do L2?
-Pokud je podgraf vyřazen, máte možnost stáhnout svůj signál. Stejně tak pokud se podgraf přesunul do L2, můžete si vybrat, zda chcete stáhnout svůj signál v mainnet Ethereum, nebo signál poslat do L2.
+When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2.
### Jak poznám, že se moje kurátorství úspěšně přeneslo?
Podrobnosti o signálu budou k dispozici prostřednictvím Průzkumníka přibližně 20 minut po spuštění nástroje pro přenos L2.
-### Mohu přenést své kurátorství na více než jeden podgraf najednou?
+### Can I transfer my curation on more than one Subgraph at a time?
V současné době není k dispozici možnost hromadného přenosu.
@@ -266,7 +266,7 @@ Nástroj pro převod L2 dokončí převod vašeho podílu přibližně za 20 min
### Musím před převodem svého podílu indexovat na Arbitrum?
-Před nastavením indexování můžete nejprve efektivně převést svůj podíl, ale nebudete si moci nárokovat žádné odměny na L2, dokud nepřidělíte podgrafy na L2, neindexujete je a nepředložíte POIs.
+You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs.
### Mohou delegáti přesunout svou delegaci dříve, než přesunu svůj indexovací podíl?
diff --git a/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-guide.mdx
index 69717e46ed39..94b78981db6b 100644
--- a/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-guide.mdx
+++ b/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-guide.mdx
@@ -6,53 +6,53 @@ Graph usnadnil přechod na úroveň L2 v Arbitrum One. Pro každého účastník
Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them.
-## Jak přenést podgraf do Arbitrum (L2)
+## How to transfer your Subgraph to Arbitrum (L2)
-## Výhody přenosu podgrafů
+## Benefits of transferring your Subgraphs
Komunita a hlavní vývojáři Graphu se v uplynulém roce [připravovali](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) na přechod na Arbitrum. Arbitrum, blockchain druhé vrstvy neboli "L2", zdědil bezpečnost po Ethereum, ale poskytuje výrazně nižší poplatky za plyn.
-Když publikujete nebo aktualizujete svůj subgraf v síti The Graph Network, komunikujete s chytrými smlouvami na protokolu, což vyžaduje platbu za plyn pomocí ETH. Přesunutím subgrafů do Arbitrum budou veškeré budoucí aktualizace subgrafů vyžadovat mnohem nižší poplatky za plyn. Nižší poplatky a skutečnost, že křivky vazby kurátorů na L2 jsou ploché, také usnadňují ostatním kurátorům kurátorství na vašem podgrafu, což zvyšuje odměny pro indexátory na vašem podgrafu. Toto prostředí s nižšími náklady také zlevňuje indexování a obsluhu subgrafu pro indexátory. Odměny za indexování se budou v následujících měsících na Arbitrum zvyšovat a na mainnetu Ethereum snižovat, takže stále více indexerů bude převádět své podíly a zakládat své operace na L2.
+When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2.
-## Porozumění tomu, co se děje se signálem, podgrafem L1 a adresami URL dotazů
+## Understanding what happens with signal, your L1 Subgraph and query URLs
-Při přenosu podgrafu do Arbitrum se používá můstek Arbitrum GRT, který zase používá nativní můstek Arbitrum k odeslání podgrafu do L2. Při "přenosu" se subgraf v mainnetu znehodnotí a odešlou se informace pro opětovné vytvoření subgrafu v L2 pomocí mostu. Zahrnuje také GRT vlastníka podgrafu, který již byl signalizován a který musí být větší než nula, aby most převod přijal.
+Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer.
-Pokud zvolíte převod podgrafu, převede se veškerý signál kurátoru podgrafu na GRT. To je ekvivalentní "znehodnocení" podgrafu v síti mainnet. GRT odpovídající vašemu kurátorství budou spolu s podgrafem odeslány na L2, kde budou vaším jménem použity k ražbě signálu.
+When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf.
-Ostatní kurátoři se mohou rozhodnout, zda si stáhnou svůj podíl GRT, nebo jej také převedou na L2, aby na stejném podgrafu vyrazili signál. Pokud vlastník podgrafu nepřevede svůj podgraf na L2 a ručně jej znehodnotí prostřednictvím volání smlouvy, pak budou Kurátoři upozorněni a budou moci stáhnout svou kurátorskou funkci.
+Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation.
-Jakmile je podgraf převeden, protože veškerá kurátorská činnost je převedena na GRT, indexátoři již nebudou dostávat odměny za indexování podgrafu. Budou však existovat indexátory, které 1) budou obsluhovat převedené podgrafy po dobu 24 hodin a 2) okamžitě začnou indexovat podgraf na L2. Protože tyto Indexery již mají podgraf zaindexovaný, nemělo by být nutné čekat na synchronizaci podgrafu a bude možné se na podgraf na L2 dotazovat téměř okamžitě.
+As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately.
-Dotazy do podgrafu L2 bude nutné zadávat na jinou adresu URL (na `arbitrum-gateway.thegraph.com`), ale adresa URL L1 bude fungovat nejméně 48 hodin. Poté bude brána L1 přeposílat dotazy na bránu L2 (po určitou dobu), což však zvýší latenci, takže se doporučuje co nejdříve přepnout všechny dotazy na novou adresu URL.
+Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency, so it is recommended to switch all your queries to the new URL as soon as possible.
## Výběr peněženky L2
-Když jste publikovali svůj podgraf na hlavní síti (mainnet), použili jste připojenou peněženku, která vlastní NFT reprezentující tento podgraf a umožňuje vám publikovat aktualizace.
+When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates.
-Při přenosu podgrafu do Arbitrum si můžete vybrat jinou peněženku, která bude vlastnit tento podgraf NFT na L2.
+When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2.
Pokud používáte "obyčejnou" peněženku, jako je MetaMask (externě vlastněný účet nebo EOA, tj. peněženka, která není chytrým kontraktem), pak je to volitelné a doporučuje se zachovat stejnou adresu vlastníka jako v L1.
-Pokud používáte peněženku s chytrým kontraktem, jako je multisig (např. Trezor), pak je nutné zvolit jinou adresu peněženky L2, protože je pravděpodobné, že tento účet existuje pouze v mainnetu a nebudete moci provádět transakce na Arbitrum pomocí této peněženky. Pokud chcete i nadále používat peněženku s chytrým kontraktem nebo multisig, vytvořte si na Arbitrum novou peněženku a její adresu použijte jako vlastníka L2 svého subgrafu.
+If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph.
-**Je velmi důležité používat adresu peněženky, kterou máte pod kontrolou a která může provádět transakce na Arbitrum. V opačném případě bude podgraf ztracen a nebude možné jej obnovit.**
+**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.**
## Příprava na převod: přemostění některých ETH
-Přenos podgrafu zahrnuje odeslání transakce přes můstek a následné provedení další transakce na Arbitrum. První transakce využívá ETH na mainnetu a obsahuje nějaké ETH na zaplacení plynu, když je zpráva přijata na L2. Pokud však tento plyn nestačí, je třeba transakci zopakovat a zaplatit za plyn přímo na L2 (to je 'Krok 3: Potvrzení převodu' níže). Tento krok musí být proveden do 7 dnů od zahájení převodu\*\*. Druhá transakce ('Krok 4: Dokončení převodu na L2') bude navíc provedena přímo na Arbitrum. Z těchto důvodů budete potřebovat nějaké ETH na peněžence Arbitrum. Pokud používáte multisig nebo smart contract účet, ETH bude muset být v běžné peněžence (EOA), kterou používáte k provádění transakcí, nikoli na samotné multisig peněžence.
+Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself.
ETH si můžete koupit na některých burzách a vybrat přímo na Arbitrum, nebo můžete použít most Arbitrum a poslat ETH z peněženky mainnetu na L2: [bridge.arbitrum.io](http://bridge.arbitrum.io). Vzhledem k tomu, že poplatky za plyn na Arbitrum jsou nižší, mělo by vám stačit jen malé množství. Doporučujeme začít na nízkém prahu (např. 0.01 ETH), aby byla vaše transakce schválena.
-## Hledání nástroje pro přenos podgrafu
+## Finding the Subgraph Transfer Tool
-Nástroj pro přenos L2 najdete při prohlížení stránky svého podgrafu v aplikaci Subgraph Studio:
+You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio:

-Je k dispozici také v Průzkumníku, pokud jste připojeni k peněžence, která vlastní podgraf, a na stránce tohoto podgrafu v Průzkumníku:
+It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer:

@@ -60,19 +60,19 @@ Kliknutím na tlačítko Přenést na L2 otevřete nástroj pro přenos, kde mů
## Krok 1: Zahájení přenosu
-Před zahájením převodu se musíte rozhodnout, která adresa bude vlastnit podgraf na L2 (viz výše "Výběr peněženky L2"), a důrazně doporučujeme mít na Arbitrum již přemostěné ETH pro plyn (viz výše "Příprava na převod: přemostění některých ETH").
+Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and we strongly recommend having some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above).
-Vezměte prosím na vědomí, že přenos podgrafu vyžaduje nenulové množství signálu na podgrafu se stejným účtem, který vlastní podgraf; pokud jste na podgrafu nesignalizovali, budete muset přidat trochu kurátorství (stačí přidat malé množství, například 1 GRT).
+Also, please note that transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph, you will have to add a bit of curation (adding a small amount like 1 GRT would suffice).
-Po otevření nástroje Transfer Tool budete moci do pole "Receiving wallet address" zadat adresu peněženky L2 - **ujistěte se, že jste zadali správnou adresu**. Kliknutím na Transfer Subgraph budete vyzváni k provedení transakce na vaší peněžence (všimněte si, že je zahrnuta určitá hodnota ETH, abyste zaplatili za plyn L2); tím se zahájí přenos a znehodnotí váš subgraf L1 (více podrobností o tom, co se děje v zákulisí, najdete výše v části "Porozumění tomu, co se děje se signálem, vaším subgrafem L1 a URL dotazů").
+After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes).
-Pokud tento krok provedete, ujistěte se, že jste pokračovali až do dokončení kroku 3 za méně než 7 dní, jinak se podgraf a váš signál GRT ztratí. To je způsobeno tím, jak funguje zasílání zpráv L1-L2 na Arbitrum: zprávy, které jsou zasílány přes most, jsou "Opakovatelný tiket", které musí být provedeny do 7 dní, a počáteční provedení může vyžadovat opakování, pokud dojde ke skokům v ceně plynu na Arbitrum.
+If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retryable tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum.

-## Krok 2: Čekání, až se podgraf dostane do L2
+## Step 2: Waiting for the Subgraph to get to L2
-Po zahájení přenosu se musí zpráva, která odesílá podgraf L1 do L2, šířit přes můstek Arbitrum. To trvá přibližně 20 minut (můstek čeká, až bude blok mainnetu obsahující transakci "bezpečný" před případnými reorgy řetězce).
+After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs).
Po uplynutí této čekací doby se Arbitrum pokusí o automatické provedení přenosu na základě smluv L2.
@@ -80,7 +80,7 @@ Po uplynutí této čekací doby se Arbitrum pokusí o automatické provedení p
## Krok 3: Potvrzení převodu
-Ve většině případů se tento krok provede automaticky, protože plyn L2 obsažený v kroku 1 by měl stačit k provedení transakce, která přijímá podgraf na smlouvách Arbitrum. V některých případech je však možné, že prudký nárůst cen plynu na Arbitrum způsobí selhání tohoto automatického provedení. V takovém případě bude "ticket", který odešle subgraf na L2, čekat na vyřízení a bude vyžadovat opakování pokusu do 7 dnů.
+In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days.
V takovém případě se musíte připojit pomocí peněženky L2, která má nějaké ETH na Arbitrum, přepnout síť peněženky na Arbitrum a kliknutím na "Confirm Transfer" zopakovat transakci.
@@ -88,33 +88,33 @@ V takovém případě se musíte připojit pomocí peněženky L2, která má n
## Krok 4: Dokončení přenosu na L2
-V tuto chvíli byly váš podgraf a GRT přijaty na Arbitrum, ale podgraf ještě není zveřejněn. Budete se muset připojit pomocí peněženky L2, kterou jste si vybrali jako přijímající peněženku, přepnout síť peněženky na Arbitrum a kliknout na "Publikovat subgraf"
+At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph."
-
+
-
+
-Tím se podgraf zveřejní, aby jej mohly začít obsluhovat indexery pracující na Arbitrum. Rovněž bude zminován kurátorský signál pomocí GRT, které byly přeneseny z L1.
+This will publish the Subgraph so that Indexers operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that was transferred from L1.
## Krok 5: Aktualizace URL dotazu
-Váš podgraf byl úspěšně přenesen do Arbitrum! Chcete-li se na podgraf zeptat, nová URL bude:
+Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be:
`https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]`
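As a minimal sketch of what switching your query URL looks like in practice, the snippet below fills in the two placeholders from the template above and builds a GraphQL request body. The API key and Subgraph ID shown are hypothetical placeholders, not real values:

```python
# Sketch: constructing the new L2 gateway query URL and a GraphQL request body.
# "my-api-key" and "QmExampleL2SubgraphId" are hypothetical placeholders.
import json

def l2_query_url(api_key: str, l2_subgraph_id: str) -> str:
    # Same shape as the URL template above, with both placeholders filled in.
    return f"https://arbitrum-gateway.thegraph.com/api/{api_key}/subgraphs/id/{l2_subgraph_id}"

url = l2_query_url("my-api-key", "QmExampleL2SubgraphId")
# A simple query payload you could POST to the gateway URL.
body = json.dumps({"query": "{ _meta { block { number } } }"})
print(url)
```

In your dapp, only the URL changes; the query body and headers stay the same as with the L1 gateway.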
-Všimněte si, že ID podgrafu v Arbitrum bude jiné než to, které jste měli v mainnetu, ale vždy ho můžete najít v Průzkumníku nebo Studiu. Jak je uvedeno výše (viz "Pochopení toho, co se děje se signálem, vaším subgrafem L1 a URL dotazů"), stará URL adresa L1 bude po krátkou dobu podporována, ale jakmile bude subgraf synchronizován na L2, měli byste své dotazy přepnout na novou adresu.
+Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs"), the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2.
## Jak přenést kurátorství do služby Arbitrum (L2)
-## Porozumění tomu, co se děje s kurátorstvím při přenosu podgrafů do L2
+## Understanding what happens to curation on Subgraph transfers to L2
-Když vlastník podgrafu převede podgraf do Arbitrum, je veškerý signál podgrafu současně převeden na GRT. To se týká "automaticky migrovaného" signálu, tj. signálu, který není specifický pro verzi podgrafu nebo nasazení, ale který následuje nejnovější verzi podgrafu.
+When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph.
-Tento převod ze signálu na GRT je stejný, jako kdyby vlastník podgrafu zrušil podgraf v L1. Při depreciaci nebo převodu subgrafu se současně "spálí" veškerý kurátorský signál (pomocí kurátorské vazební křivky) a výsledný GRT je držen inteligentním kontraktem GNS (tedy kontraktem, který se stará o upgrade subgrafu a automatickou migraci signálu). Každý kurátor na tomto subgrafu má tedy nárok na tento GRT úměrný množství podílů, které měl na subgrafu.
+This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph.
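The pro-rata claim described above can be sketched as simple arithmetic (an assumption-level illustration, not the actual GNS contract logic): each Curator's share of the burned-signal GRT held by GNS is proportional to the curation shares they held on the Subgraph.

```python
# Sketch (assumption: plain pro-rata math, not the real GNS contract code).
def curator_claim(total_grt: float, curator_shares: float, total_shares: float) -> float:
    # A Curator's claim on the GRT held by GNS after deprecation/transfer,
    # proportional to their fraction of the Subgraph's curation shares.
    return total_grt * curator_shares / total_shares

# e.g. 10,000 GRT held by GNS; a Curator holding 250 of 1,000 total shares:
print(curator_claim(10_000, 250, 1_000))  # 2500.0
```

This is why there is no deadline pressure on withdrawal: the proportion depends only on shares held at burn time, not on when the claim is made.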
-Část těchto GRT odpovídající vlastníkovi podgrafu je odeslána do L2 spolu s podgrafem.
+A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph.
-V tomto okamžiku se za kurátorský GRT již nebudou účtovat žádné poplatky za dotazování, takže kurátoři se mohou rozhodnout, zda svůj GRT stáhnou, nebo jej přenesou do stejného podgrafu na L2, kde může být použit k ražbě nového kurátorského signálu. S tímto úkonem není třeba spěchat, protože GRT lze pomáhat donekonečna a každý dostane částku úměrnou svému podílu bez ohledu na to, kdy tak učiní.
+At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it.
## Výběr peněženky L2
@@ -130,9 +130,9 @@ Pokud používáte peněženku s chytrým kontraktem, jako je multisig (např. T
Před zahájením převodu se musíte rozhodnout, která adresa bude vlastnit kurátorství na L2 (viz výše "Výběr peněženky L2"), a doporučujeme mít nějaké ETH pro plyn již přemostěné na Arbitrum pro případ, že byste potřebovali zopakovat provedení zprávy na L2. ETH můžete nakoupit na některých burzách a vybrat si ho přímo na Arbitrum, nebo můžete použít Arbitrum bridge pro odeslání ETH z peněženky mainnetu na L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - protože poplatky za plyn na Arbitrum jsou tak nízké, mělo by vám stačit jen malé množství, např. 0,01 ETH bude pravděpodobně více než dostačující.
-Pokud byl podgraf, do kterého kurátor provádí kurátorství, převeden do L2, zobrazí se v Průzkumníku zpráva, že kurátorství provádíte do převedeného podgrafu.
+If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph.
-Při pohledu na stránku podgrafu můžete zvolit stažení nebo přenos kurátorství. Kliknutím na "Přenést signál do Arbitrum" otevřete nástroj pro přenos.
+When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool.

@@ -162,4 +162,4 @@ V takovém případě se musíte připojit pomocí peněženky L2, která má n
## Odstranění vašeho kurátorství na L1
-Pokud nechcete posílat GRT na L2 nebo byste raději překlenuli GRT ručně, můžete si na L1 stáhnout svůj kurátorovaný GRT. Na banneru na stránce podgrafu zvolte "Withdraw Signal" a potvrďte transakci; GRT bude odeslán na vaši adresu kurátora.
+If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address.
diff --git a/website/src/pages/cs/archived/sunrise.mdx b/website/src/pages/cs/archived/sunrise.mdx
index 71b86ac159ff..52e8c90d7708 100644
--- a/website/src/pages/cs/archived/sunrise.mdx
+++ b/website/src/pages/cs/archived/sunrise.mdx
@@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ
## Jaký byl úsvit decentralizovaných dat?
-Úsvit decentralizovaných dat byla iniciativa, kterou vedla společnost Edge & Node. Tato iniciativa umožnila vývojářům podgrafů bezproblémově přejít na decentralizovanou síť Graf.
+The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly.
-This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs.
+This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs.
### Co se stalo s hostovanou službou?
-Koncové body dotazů hostované služby již nejsou k dispozici a vývojáři nemohou v hostované službě nasadit nové podgrafy.
+The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service.
-Během procesu aktualizace mohli vlastníci podgrafů hostovaných služeb aktualizovat své podgrafy na síť Graf. Vývojáři navíc mohli nárokovat automatickou aktualizaci podgrafů.
+During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs.
### Měla tato aktualizace vliv na Podgraf Studio?
Ne, na Podgraf Studio neměl Sunrise vliv. Podgrafy byly okamžitě k dispozici pro dotazování, a to díky aktualizačnímu indexeru, který využívá stejnou infrastrukturu jako hostovaná služba.
-### Proč byly podgrafy zveřejněny na Arbitrum, začalo indexovat jinou síť?
+### Why were Subgraphs published to Arbitrum, did it start indexing a different network?
-The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/)
+The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/).
## O Upgrade Indexer
> Aktualizace Indexer je v současné době aktivní.
-Upgrade Indexer byl implementován za účelem zlepšení zkušeností s upgradem podgrafů z hostované služby do sit' Graf a podpory nových verzí stávajících podgrafů, které dosud nebyly indexovány.
+The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed.
### Co dělá upgrade Indexer?
-- Zavádí řetězce, které ještě nezískaly odměnu za indexaci v síti Graf, a zajišťuje, aby byl po zveřejnění podgrafu co nejrychleji k dispozici indexátor pro obsluhu dotazů.
+- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published.
- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/).
-- Indexátoři, kteří provozují upgrade indexátoru, tak činí jako veřejnou službu pro podporu nových podgrafů a dalších řetězců, kterým chybí indexační odměny, než je Rada grafů schválí.
+- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them.
### Proč Edge & Node spouští aktualizaci Indexer?
-Edge & Node historicky udržovaly hostovanou službu, a proto již mají synchronizovaná data pro podgrafy hostované služby.
+Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs.
### Co znamená upgrade indexeru pro stávající indexery?
Řetězce, které byly dříve podporovány pouze v hostované službě, byly vývojářům zpřístupněny v síti Graf nejprve bez odměn za indexování.
-Tato akce však uvolnila poplatky za dotazy pro všechny zájemce o indexování a zvýšila počet podgrafů zveřejněných v síti Graf. V důsledku toho mají indexátoři více příležitostí indexovat a obsluhovat tyto podgrafy výměnou za poplatky za dotazy, a to ještě předtím, než jsou odměny za indexování pro řetězec povoleny.
+However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain.
-Upgrade Indexer také poskytuje komunitě Indexer informace o potenciální poptávce po podgrafech a nových řetězcích v síti grafů.
+The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network.
### Co to znamená pro delegáti?
-The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity.
+The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity.
### Did the upgrade Indexer compete with existing Indexers for rewards?
-No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards.
+No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards.
-It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs.
+It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs.
-### How does this affect subgraph developers?
+### How does this affect Subgraph developers?
-Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade.
+Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade.
### How does the upgrade Indexer benefit data consumers?
@@ -71,10 +71,10 @@ Aktualizace Indexeru umožňuje podporu blockchainů v síti, které byly dřív
The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market.
-### When will the upgrade Indexer stop supporting a subgraph?
+### When will the upgrade Indexer stop supporting a Subgraph?
-The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it.
+The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it.
-Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days.
+Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days.
-Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it.
+Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it.
diff --git a/website/src/pages/cs/global.json b/website/src/pages/cs/global.json
index c431472eb4f5..59211940d133 100644
--- a/website/src/pages/cs/global.json
+++ b/website/src/pages/cs/global.json
@@ -6,6 +6,7 @@
"subgraphs": "Podgrafy",
"substreams": "Substreams",
"sps": "Substreams-Powered Subgraphs",
+ "tokenApi": "Token API",
"indexing": "Indexing",
"resources": "Resources",
"archived": "Archived"
@@ -24,9 +25,51 @@
"linkToThisSection": "Link to this section"
},
"content": {
- "note": "Note",
+ "callout": {
+ "note": "Note",
+ "tip": "Tip",
+ "important": "Important",
+ "warning": "Warning",
+ "caution": "Caution"
+ },
"video": "Video"
},
+ "openApi": {
+ "parameters": {
+ "pathParameters": "Path Parameters",
+ "queryParameters": "Query Parameters",
+ "headerParameters": "Header Parameters",
+ "cookieParameters": "Cookie Parameters",
+ "parameter": "Parameter",
+ "description": "Popis",
+ "value": "Value",
+ "required": "Required",
+ "deprecated": "Deprecated",
+ "defaultValue": "Default value",
+ "minimumValue": "Minimum value",
+ "maximumValue": "Maximum value",
+ "acceptedValues": "Accepted values",
+ "acceptedPattern": "Accepted pattern",
+ "format": "Format",
+ "serializationFormat": "Serialization format"
+ },
+ "request": {
+ "label": "Test this endpoint",
+ "noCredentialsRequired": "No credentials required",
+ "send": "Send Request"
+ },
+ "responses": {
+ "potentialResponses": "Potential Responses",
+ "status": "Status",
+ "description": "Popis",
+ "liveResponse": "Live Response",
+ "example": "Příklad"
+ },
+ "errors": {
+ "invalidApi": "Could not retrieve API {0}.",
+ "invalidOperation": "Could not retrieve operation {0} in API {1}."
+ }
+ },
"notFound": {
"title": "Oops! This page was lost in space...",
"subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.",
diff --git a/website/src/pages/cs/index.json b/website/src/pages/cs/index.json
index dd7566b56c2e..545b2b717b56 100644
--- a/website/src/pages/cs/index.json
+++ b/website/src/pages/cs/index.json
@@ -7,7 +7,7 @@
"cta2": "Build your first subgraph"
},
"products": {
- "title": "The Graph’s Products",
+ "title": "The Graph's Products",
"description": "Choose a solution that fits your needs—interact with blockchain data your way.",
"subgraphs": {
"title": "Podgrafy",
@@ -21,7 +21,7 @@
},
"sps": {
"title": "Substreams-Powered Subgraphs",
- "description": "Boost your subgraph’s efficiency and scalability by using Substreams.",
+ "description": "Boost your subgraph's efficiency and scalability by using Substreams.",
"cta": "Set up a Substreams-powered subgraph"
},
"graphNode": {
@@ -39,12 +39,12 @@
"title": "Podporované sítě",
"details": "Network Details",
"services": "Services",
- "type": "Type",
+ "type": "Typ",
"protocol": "Protocol",
"identifier": "Identifier",
"chainId": "Chain ID",
"nativeCurrency": "Native Currency",
- "docs": "Docs",
+ "docs": "Dokumenty",
"shortName": "Short Name",
"guides": "Guides",
"search": "Search networks",
@@ -67,7 +67,7 @@
"tableHeaders": {
"name": "Name",
"id": "ID",
- "subgraphs": "Subgraphs",
+ "subgraphs": "Podgrafy",
"substreams": "Substreams",
"firehose": "Firehose",
"tokenapi": "Token API"
@@ -92,7 +92,7 @@
"description": "Leverage features like custom data sources, event handlers, and topic filters."
},
"billing": {
- "title": "Billing",
+ "title": "Fakturace",
"description": "Optimize costs and manage billing efficiently."
}
},
@@ -156,15 +156,15 @@
"watchOnYouTube": "Watch on YouTube",
"theGraphExplained": {
"title": "The Graph Explained In 1 Minute",
- "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video."
+ "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video."
},
"whatIsDelegating": {
"title": "What is Delegating?",
- "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating."
+ "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph."
},
"howToIndexSolana": {
"title": "How to Index Solana with a Substreams-powered Subgraph",
- "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph."
+ "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases."
}
},
"time": {
diff --git a/website/src/pages/cs/indexing/chain-integration-overview.mdx b/website/src/pages/cs/indexing/chain-integration-overview.mdx
index e048421d7ad9..a2f1eed58864 100644
--- a/website/src/pages/cs/indexing/chain-integration-overview.mdx
+++ b/website/src/pages/cs/indexing/chain-integration-overview.mdx
@@ -36,7 +36,7 @@ Tento proces souvisí se službou Datová služba podgrafů a vztahuje se pouze
### 2. Co se stane, když podpora Firehose & Substreams přijde až poté, co bude síť podporována v mainnet?
-To by mělo vliv pouze na podporu protokolu pro indexování odměn na podgrafech s podsílou. Novou implementaci Firehose by bylo třeba testovat v testnetu podle metodiky popsané pro fázi 2 v tomto GIP. Podobně, za předpokladu, že implementace bude výkonná a spolehlivá, by bylo nutné provést PR na [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) (`Substreams data sources` Subgraph Feature) a také nový GIP pro podporu protokolu pro indexování odměn. PR a GIP může vytvořit kdokoli; nadace by pomohla se schválením Radou.
+This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval.
### 3. How much time will the process of reaching full protocol support take?
diff --git a/website/src/pages/cs/indexing/new-chain-integration.mdx b/website/src/pages/cs/indexing/new-chain-integration.mdx
index 5eb78fc9efbd..0d856bfa9374 100644
--- a/website/src/pages/cs/indexing/new-chain-integration.mdx
+++ b/website/src/pages/cs/indexing/new-chain-integration.mdx
@@ -2,7 +2,7 @@
title: New Chain Integration
---
-Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies:
+Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies:
1. **EVM JSON-RPC**
2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms.
@@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through
## EVM considerations - Difference between JSON-RPC & Firehose
-While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing.
+While JSON-RPC and Firehose are both suitable for Subgraphs, Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing.
-- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes.
+- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes.
-> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers)
+> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth noting that `eth_call` is not a good practice for developers.)
## Config uzlu grafu
-Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph.
+Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph.
1. [Clone Graph Node](https://github.com/graphprotocol/graph-node)
@@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your
## Substreams-powered Subgraphs
-For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself.
+For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools make it possible to build [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself.
diff --git a/website/src/pages/cs/indexing/overview.mdx b/website/src/pages/cs/indexing/overview.mdx
index 52eda54899f1..8acf4fdf72a9 100644
--- a/website/src/pages/cs/indexing/overview.mdx
+++ b/website/src/pages/cs/indexing/overview.mdx
@@ -7,7 +7,7 @@ Indexery jsou operátoři uzlů v síti Graf, kteří sázejí graf tokeny (GRT)
GRT, který je v protokolu založen, podléhá období rozmrazování a může být zkrácen, pokud jsou indexátory škodlivé a poskytují aplikacím nesprávná data nebo pokud indexují nesprávně. Indexátoři také získávají odměny za delegované sázky od delegátů, aby přispěli do sítě.
-Indexátory vybírají podgrafy k indexování na základě signálu kurátorů podgrafů, přičemž kurátoři sázejí na GRT, aby určili, které podgrafy jsou vysoce kvalitní a měly by být upřednostněny. Spotřebitelé (např. aplikace) mohou také nastavit parametry, podle kterých indexátoři zpracovávají dotazy pro jejich podgrafy, a nastavit preference pro stanovení ceny poplatků za dotazy.
+Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing.
## FAQ
@@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT.
**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity.
-**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network.
+**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network.
### How are indexing rewards distributed?
-Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.**
+Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.**
Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack.
### What is a proof of indexing (POI)?
-POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block.
+POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block.
### When are indexing rewards distributed?
@@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap
Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps:
-1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations:
+1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations:
```graphql
query indexerAllocations {
@@ -91,24 +91,24 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that
- **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%.
-### How do Indexers know which subgraphs to index?
+### How do Indexers know which Subgraphs to index?
-Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network:
+Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate Subgraphs in the network:
-- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up.
+- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up.
-- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand.
+- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand.
-- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply.
+- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply.
-- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards.
+- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards.
### What are the hardware requirements?
-- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded.
+- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded.
- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests.
-- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second.
-- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic.
+- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second.
+- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic.
| Setup | Postgres<br />(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs<br />(memory in GBs) |
| --- | :-: | :-: | :-: | :-: | :-: |
@@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making
## Infrastructure
-At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.
+At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.
-- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
+- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
-- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
+- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
-- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway.
-- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations.
+- **Indexer agent** - Facilitates the Indexer's interactions onchain including registering on the network, managing Subgraph deployments to its Graph Node/s, and managing allocations.
- **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server.
@@ -149,8 +149,8 @@ Note: To support agile scaling, it is recommended that query and indexing concer
| Port | Purpose | Routes | CLI Argument | Environment Variable |
| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server<br />(for subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS<br />(for subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - |
+| 8000 | GraphQL HTTP server<br />(for Subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS<br />(for Subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - |
| 8020 | JSON-RPC<br />(for managing deployments) | / | \--admin-port | - |
| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
@@ -159,7 +159,7 @@ Note: To support agile scaling, it is recommended that query and indexing concer
| Port | Purpose | Routes | CLI Argument | Environment Variable |
| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP server<br />(for paid subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
+| 7600 | GraphQL HTTP server<br />(for paid Subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
#### Indexer Agent
@@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`.
### Uzel Graf
-[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint.
+[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema and a set of mappings for transforming the data sourced from the blockchain; Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint.
#### Getting started from source
@@ -365,9 +365,9 @@ docker-compose up
To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components:
-- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each.
+- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated onchain, and how much is allocated towards each.
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways.
+- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, and shares important decision-making information with clients like the gateways.
- **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules.
@@ -525,7 +525,7 @@ graph indexer status
#### Indexer management using Indexer CLI
-The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution.
+The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution.
#### Usage
@@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar
- `graph indexer rules set [options] ...` - Set one or more indexing rules.
-- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed.
+- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to `always`, then all available Subgraphs on the network will be indexed.
- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index.
@@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported
#### Indexing rules
-Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing.
+Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing.
-For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`.
+For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`.
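The threshold comparison described above can be sketched as follows. This is an illustrative model only, not the actual `indexer-agent` internals; the field names mirror the documented rule fields, and any deployment whose network values cross a non-null threshold is selected:

```python
# Hypothetical sketch of threshold-rule evaluation; the real indexer-agent
# implementation differs, but the decision logic described above amounts to this.
from dataclasses import dataclass
from typing import Optional

@dataclass
class IndexingRule:
    decision_basis: str = "rules"
    min_stake: Optional[float] = None               # GRT
    min_signal: Optional[float] = None              # GRT
    max_signal: Optional[float] = None              # GRT
    min_average_query_fees: Optional[float] = None  # GRT

def should_index(rule: IndexingRule, stake: float, signal: float, avg_query_fees: float) -> bool:
    """Return True if the deployment crosses any non-null threshold on the rule."""
    if rule.decision_basis == "always":
        return True
    if rule.decision_basis == "never":
        return False
    # Under `rules`, satisfying any one non-null threshold selects the deployment.
    checks = [
        rule.min_stake is not None and stake > rule.min_stake,
        rule.min_signal is not None and signal > rule.min_signal,
        rule.max_signal is not None and signal < rule.max_signal,
        rule.min_average_query_fees is not None and avg_query_fees > rule.min_average_query_fees,
    ]
    return any(checks)

# A global rule with minStake = 5 GRT selects any deployment with more than 5 GRT allocated.
global_rule = IndexingRule(min_stake=5)
print(should_index(global_rule, stake=10, signal=0, avg_query_fees=0))  # True
```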
Data model:
@@ -679,7 +679,7 @@ graph indexer actions execute approve
Note that supported action types for allocation management have different input requirements:
-- `Allocate` - allocate stake to a specific subgraph deployment
+- `Allocate` - allocate stake to a specific Subgraph deployment
- required action params:
- deploymentID
@@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input
- poi
- force (forces using the provided POI even if it doesn’t match what the graph-node provides)
-- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment
+- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment
- required action params:
- allocationID
@@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input
#### Cost models
-Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers.
+Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers.
#### Agora
@@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi
6. Call `stake()` to stake GRT in the protocol.
-7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address.
+7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on Subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address.
-8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks.
+8. (Optional) In order to control the distribution of rewards and strategically attract Delegators, Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and sets the `cooldownBlocks` period to 500 blocks.
```
setDelegationParameters(950000, 600000, 500)
@@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st
After being created by an Indexer a healthy allocation goes through two states.
-- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules.
+- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules.
- **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)).
-Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically.
+Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or have some chance of failing non-deterministically.
diff --git a/website/src/pages/cs/indexing/supported-network-requirements.mdx b/website/src/pages/cs/indexing/supported-network-requirements.mdx
index a81118cec231..b241acc94b41 100644
--- a/website/src/pages/cs/indexing/supported-network-requirements.mdx
+++ b/website/src/pages/cs/indexing/supported-network-requirements.mdx
@@ -6,7 +6,7 @@ title: Supported Network Requirements
| --- | --- | --- | :-: |
| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)<br>[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU<br>Ubuntu 22.04<br>16GB+ RAM<br>>= 8 TiB NVMe SSD<br>_last updated August 2023_ | ✅ |
| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU<br>Ubuntu 22.04<br>16GB+ RAM<br>>= 5 TiB NVMe SSD<br>_last updated August 2023_ | ✅ |
-| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)<br>[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)<br>[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU<br>Debian 12/Ubuntu 22.04<br>16 GB RAM<br>>= 4.5TB (NVME preffered)<br>_last updated 14th May 2024_ | ✅ |
+| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)<br>[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)<br>[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU<br>Debian 12/Ubuntu 22.04<br>16 GB RAM<br>>= 4.5TB (NVME preferred)<br>_last updated 14th May 2024_ | ✅ |
| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU<br>Ubuntu 22.04<br>>=32 GB RAM<br>>= 14 TiB NVMe SSD<br>_last updated 22nd June 2024_ | ✅ |
| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU<br>Ubuntu 22.04<br>16GB+ RAM<br>>= 2 TiB NVMe SSD<br>_last updated August 2023_ | ✅ |
| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Higher clock speed over core count<br>Ubuntu 22.04<br>16GB+ RAM<br>>=3TB (NVMe recommended)<br>_last updated August 2023_ | ✅ |
diff --git a/website/src/pages/cs/indexing/tap.mdx b/website/src/pages/cs/indexing/tap.mdx
index f8d028634016..6063720aca9d 100644
--- a/website/src/pages/cs/indexing/tap.mdx
+++ b/website/src/pages/cs/indexing/tap.mdx
@@ -1,21 +1,21 @@
---
-title: TAP Migration Guide
+title: GraphTally Guide
---
-Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust.
+Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust.
## Přehled
-[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features:
+GraphTally is a drop-in replacement for the Scalar payment system currently in place. It provides the following key features:
- Efficiently handles micropayments.
- Adds a layer of consolidations to onchain transactions and costs.
- Allows Indexers control of receipts and payments, guaranteeing payment for queries.
- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders.
-## Specifics
+### Specifics
-TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process.
+GraphTally allows a sender to make multiple payments, **Receipts**, to a receiver; these are aggregated into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process.
For each query, the gateway will send you a `signed receipt` that is stored on your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts and this will generate a new RAV with an increased value.
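The receipt-to-RAV flow above can be sketched as a simple fold. This is an illustrative model only: real GraphTally receipts and RAVs are signed EIP-712 structures handled by `tap-agent`, and the field names here are hypothetical:

```python
# Illustrative sketch of aggregating query receipts into a RAV, as described
# above. Field names are hypothetical; real receipts/RAVs are signed structures.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Receipt:
    allocation_id: str
    value: int  # payment for a single query

@dataclass
class RAV:
    allocation_id: str
    value_aggregate: int  # running total of all aggregated receipts

def aggregate(receipts: list, previous: Optional[RAV] = None) -> RAV:
    """Fold new receipts (plus any previous RAV) into a new RAV with an increased value."""
    base = previous.value_aggregate if previous else 0
    total = base + sum(r.value for r in receipts)
    return RAV(allocation_id=receipts[0].allocation_id, value_aggregate=total)

rav1 = aggregate([Receipt("0xabc", 10), Receipt("0xabc", 15)])
# Updating the RAV with newer receipts yields a new RAV with a larger aggregate.
rav2 = aggregate([Receipt("0xabc", 5)], previous=rav1)
```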
@@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed
| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` |
| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` |
-### Požadavky
+### Prerequisites
-In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`.
+In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can query it via The Graph Network or host it yourself on your `graph-node`.
-- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD)
-- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1)
+- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD)
+- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1)
-> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually.
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually.
## Migration Guide
@@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc
1. **Indexer Agent**
- Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components).
- - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs.
+ - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs.
2. **Indexer Service**
@@ -128,18 +128,18 @@ query_url = ""
status_url = ""
[subgraphs.network]
-# Query URL for the Graph Network subgraph.
+# Query URL for the Graph Network Subgraph.
query_url = ""
# Optional, deployment to look for in the local `graph-node`, if locally indexed.
-# Locally indexing the subgraph is recommended.
+# Locally indexing the Subgraph is recommended.
# NOTE: Use `query_url` or `deployment_id` only
deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
[subgraphs.escrow]
-# Query URL for the Escrow subgraph.
+# Query URL for the Escrow Subgraph.
query_url = ""
# Optional, deployment to look for in the local `graph-node`, if locally indexed.
-# Locally indexing the subgraph is recommended.
+# Locally indexing the Subgraph is recommended.
# NOTE: Use `query_url` or `deployment_id` only
deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
diff --git a/website/src/pages/cs/indexing/tooling/graph-node.mdx b/website/src/pages/cs/indexing/tooling/graph-node.mdx
index 88ddb88813fb..9257902fe247 100644
--- a/website/src/pages/cs/indexing/tooling/graph-node.mdx
+++ b/website/src/pages/cs/indexing/tooling/graph-node.mdx
@@ -2,31 +2,31 @@
title: Uzel Graf
---
-Graf Uzel je komponenta, která indexuje podgrafy a zpřístupňuje výsledná data k dotazování prostřednictvím rozhraní GraphQL API. Jako taková je ústředním prvkem zásobníku indexeru a její správná činnost je pro úspěšný provoz indexeru klíčová.
+Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer.
This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node).
## Uzel Graf
-[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query.
+[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query.
Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node).
### Databáze PostgreSQL
-Hlavní úložiště pro uzel Graf Uzel, kde jsou uložena data podgrafů, metadata o podgraf a síťová data týkající se podgrafů, jako je bloková cache a cache eth_call.
+The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache.
### Síťoví klienti
Aby mohl uzel Graph Node indexovat síť, potřebuje přístup k síťovému klientovi prostřednictvím rozhraní API JSON-RPC kompatibilního s EVM. Toto RPC se může připojit k jedinému klientovi nebo může jít o složitější nastavení, které vyrovnává zátěž mezi více klienty.
-While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)).
+While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)).
**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/).
### IPFS uzly
-Metadata nasazení podgrafů jsou uložena v síti IPFS. Uzel Graf přistupuje během nasazení podgrafu především k uzlu IPFS, aby načetl manifest podgrafu a všechny propojené soubory. Síťové indexery nemusí hostit vlastní uzel IPFS. Uzel IPFS pro síť je hostován na adrese https://ipfs.network.thegraph.com.
+Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
### Metrický server Prometheus
@@ -79,8 +79,8 @@ Když je Graf Uzel spuštěn, zpřístupňuje následující ports:
| Port | Purpose | Routes | CLI Argument | Environment Variable |
| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server<br>(for subgraph queries) | /subgraphs/id/...<br>/subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS<br>(for subgraph subscriptions) | /subgraphs/id/...<br>/subgraphs/name/.../... | \--ws-port | - |
+| 8000 | GraphQL HTTP server<br>(for Subgraph queries) | /subgraphs/id/...<br>/subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS<br>(for Subgraph subscriptions) | /subgraphs/id/...<br>/subgraphs/name/.../... | \--ws-port | - |
| 8020 | JSON-RPC<br>(for managing deployments) | / | \--admin-port | - |
| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
@@ -89,7 +89,7 @@ Když je Graf Uzel spuštěn, zpřístupňuje následující ports:
## Pokročilá konfigurace uzlu Graf
-V nejjednodušším případě lze Graf Uzel provozovat s jednou instancí Graf Uzel, jednou databází PostgreSQL, uzlem IPFS a síťovými klienty podle potřeby indexovaných podgrafů.
+At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed.
This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables.
@@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https:
#### Více uzlů graf
-Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules).
+Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules).
> Všimněte si, že více graf uzlů lze nakonfigurovat tak, aby používaly stejnou databázi, kterou lze horizontálně škálovat pomocí sharding.
#### Pravidla nasazení
-Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision.
+Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision.
Příklad konfigurace pravidla nasazení:
@@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ]
match = { network = [ "xdai", "poa-core" ] }
indexers = [ "index_node_other_0" ]
[[deployment.rule]]
-# There's no 'match', so any subgraph matches
+# There's no 'match', so any Subgraph matches
shards = [ "sharda", "shardb" ]
indexers = [
"index_node_community_0",
@@ -167,11 +167,11 @@ Každý uzel, jehož --node-id odpovídá regulárnímu výrazu, bude nastaven t
Pro většinu případů použití postačuje k podpoře instance graf uzlu jedna databáze Postgres. Pokud instance graf uzlu přeroste rámec jedné databáze Postgres, je možné rozdělit ukládání dat grafového uzlu do více databází Postgres. Všechny databáze dohromady tvoří úložiště instance graf uzlu. Každá jednotlivá databáze se nazývá shard.
-Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed.
+Shards can be used to split Subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed.
Sharding se stává užitečným, když vaše stávající databáze nedokáže udržet krok se zátěží, kterou na ni Graf Uzel vyvíjí, a když už není možné zvětšit velikost databáze.
-> Obecně je lepší vytvořit jednu co největší databázi, než začít s oddíly. Jednou z výjimek jsou případy, kdy je provoz dotazů rozdělen velmi nerovnoměrně mezi dílčí podgrafy; v těchto situacích může výrazně pomoci, pokud jsou dílčí podgrafy s velkým objemem uchovávány v jednom shardu a vše ostatní v jiném, protože toto nastavení zvyšuje pravděpodobnost, že data pro dílčí podgrafu s velkým objemem zůstanou v interní cache db a nebudou nahrazena daty, která nejsou tolik potřebná z dílčích podgrafů s malým objemem.
+> It is generally better to make a single database as big as possible before starting with shards. One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another, because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by less-needed data from low-volume Subgraphs.
Pokud jde o konfiguraci připojení, začněte s max_connections v souboru postgresql.conf nastaveným na 400 (nebo možná dokonce 200) a podívejte se na metriky store_connection_wait_time_ms a store_connection_checkout_count Prometheus. Výrazné čekací doby (cokoli nad 5 ms) jsou známkou toho, že je k dispozici příliš málo připojení; vysoké čekací doby tam budou také způsobeny tím, že databáze je velmi vytížená (například vysoké zatížení procesoru). Pokud se však databáze jinak jeví jako stabilní, vysoké čekací doby naznačují potřebu zvýšit počet připojení. V konfiguraci je horní hranicí, kolik připojení může každá instance graf uzlu používat, a graf uzel nebude udržovat otevřená připojení, pokud je nepotřebuje.
@@ -188,7 +188,7 @@ ingestor = "block_ingestor_node"
#### Podpora více sítí
-The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of:
+The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of:
- Více sítí
- Více poskytovatelů na síť (to může umožnit rozdělení zátěže mezi poskytovatele a také konfiguraci plných uzlů i archivních uzlů, přičemž Graph Node může preferovat levnější poskytovatele, pokud to daná pracovní zátěž umožňuje).
@@ -225,11 +225,11 @@ Uživatelé, kteří provozují škálované nastavení indexování s pokročil
### Správa uzlu graf
-Vzhledem k běžícímu uzlu Graf (nebo uzlům Graf Uzel!) je pak úkolem spravovat rozmístěné podgrafy v těchto uzlech. Graf Uzel nabízí řadu nástrojů, které pomáhají se správou podgrafů.
+Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs.
#### Protokolování
-Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace.
+Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace.
In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs).
@@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker
Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs`
-### Práce s podgrafy
+### Working with Subgraphs
#### Stav indexování API
-API pro stav indexování, které je ve výchozím nastavení dostupné na portu 8030/graphql, nabízí řadu metod pro kontrolu stavu indexování pro různé podgrafy, kontrolu důkazů indexování, kontrolu vlastností podgrafů a další.
+Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more.
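As a minimal sketch, the status API can be queried with a plain GraphQL POST. This assumes a locally running graph-node with the index-node port on its default 8030; the selected fields follow the published status schema, and the request is not executed against a live node here:

```python
# Minimal sketch of querying the indexing status API on a local graph-node.
# Assumes the index-node port is at its default (8030); not run against a live node.
import json
from urllib import request

STATUS_QUERY = """
{
  indexingStatuses {
    subgraph
    synced
    health
    fatalError { message }
  }
}
"""

def fetch_statuses(endpoint: str = "http://localhost:8030/graphql") -> dict:
    """POST the status query to the index node and return the decoded JSON response."""
    body = json.dumps({"query": STATUS_QUERY}).encode()
    req = request.Request(endpoint, data=body, headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```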
The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql).
@@ -263,7 +263,7 @@ Proces indexování má tři samostatné části:
- Zpracování událostí v pořadí pomocí příslušných obslužných (to může zahrnovat volání řetězce pro zjištění stavu a načtení dat z úložiště)
- Zápis výsledných dat do úložiště
-Tyto fáze jsou spojeny do potrubí (tj. mohou být prováděny paralelně), ale jsou na sobě závislé. Pokud se podgrafy indexují pomalu, bude příčina záviset na konkrétním podgrafu.
+These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph.
Běžné příčiny pomalého indexování:
@@ -276,24 +276,24 @@ Běžné příčiny pomalého indexování:
- Samotný poskytovatel se dostává za hlavu řetězu
- Pomalé načítání nových účtenek od poskytovatele v hlavě řetězce
-Metriky indexování podgrafů mohou pomoci diagnostikovat hlavní příčinu pomalého indexování. V některých případech spočívá problém v samotném podgrafu, ale v jiných případech mohou zlepšení síťových poskytovatelů, snížení konfliktů v databázi a další zlepšení konfigurace výrazně zlepšit výkon indexování.
+Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance.
-#### Neúspěšné podgrafy
+#### Failed Subgraphs
-Během indexování mohou dílčí graf selhat, pokud narazí na neočekávaná data, pokud některá komponenta nefunguje podle očekávání nebo pokud je chyba ve zpracovatelích událostí nebo v konfiguraci. Existují dva obecné typy selhání:
+During indexing, Subgraphs might fail if they encounter unexpected data, if some component is not working as expected, or if there is a bug in the event handlers or configuration. There are two general types of failure:
- Deterministická selhání: jedná se o selhání, která nebudou vyřešena opakovanými pokusy
- Nedeterministická selhání: mohou být způsobena problémy se zprostředkovatelem nebo neočekávanou chybou grafického uzlu. Pokud dojde k nedeterministickému selhání, uzel Graf zopakuje selhání obsluhy a postupně se vrátí zpět.
-V některých případech může být chyba řešitelná indexátorem (například pokud je chyba důsledkem toho, že není k dispozici správný typ zprostředkovatele, přidání požadovaného zprostředkovatele umožní pokračovat v indexování). V jiných případech je však nutná změna v kódu podgrafu.
+In some cases, a failure might be resolvable by the indexer (for example, if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However, in other cases, a change to the Subgraph code is required.
-> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository.
+> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository.
#### Bloková a volací mezipaměť
-Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph.
+Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph.
-However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider.
+However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider.
Pokud existuje podezření na nekonzistenci blokové mezipaměti, například chybějící událost tx receipt:
@@ -304,7 +304,7 @@ Pokud existuje podezření na nekonzistenci blokové mezipaměti, například ch
#### Problémy a chyby při dotazování
-Jakmile je podgraf indexován, lze očekávat, že indexery budou obsluhovat dotazy prostřednictvím koncového bodu vyhrazeného pro dotazy podgrafu. Pokud indexátor doufá, že bude obsluhovat značný objem dotazů, doporučuje se použít vyhrazený uzel pro dotazy a v případě velmi vysokého objemu dotazů mohou indexátory chtít nakonfigurovat oddíly replik tak, aby dotazy neovlivňovaly proces indexování.
+Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process.
I s vyhrazeným dotazovacím uzlem a replikami však může provádění některých dotazů trvat dlouho a v některých případech může zvýšit využití paměti a negativně ovlivnit dobu dotazování ostatních uživatelů.
@@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat
##### Analýza dotazů
-Problematické dotazy se nejčastěji objevují jedním ze dvou způsobů. V některých případech uživatelé sami hlásí, že daný dotaz je pomalý. V takovém případě je úkolem diagnostikovat příčinu pomalosti - zda se jedná o obecný problém, nebo o specifický problém daného podgrafu či dotazu. A pak ho samozřejmě vyřešit, pokud je to možné.
+Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible.
V jiných případech může být spouštěcím faktorem vysoké využití paměti v uzlu dotazu a v takovém případě je třeba nejprve identifikovat dotaz, který problém způsobuje.
@@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the
Once a table has been determined to be account-like, running `graphman stats account-like .` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear .` It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again.
-For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.
+For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.
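A hedged sketch of the workflow described above (command names per the `graphman` docs; the deployment namespace `sgd1234` and the `pair` table are placeholder examples):

```shell
# Inspect table statistics to spot account-like candidates
graphman --config config.toml stats show sgd1234

# Turn the optimization on for a specific table...
graphman --config config.toml stats account-like sgd1234.pair

# ...and off again if queries for that table get slower
graphman --config config.toml stats account-like --clear sgd1234.pair
```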
-#### Odstranění podgrafů
+#### Removing Subgraphs
> Jedná se o novou funkci, která bude k dispozici v uzlu Graf 0.29.x
-At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
+At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all of its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
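For instance (the deployment hash below is a placeholder):

```shell
# Delete a deployment and all of its indexed data
graphman --config config.toml drop QmYourSubgraphDeploymentHash
```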
diff --git a/website/src/pages/cs/indexing/tooling/graphcast.mdx b/website/src/pages/cs/indexing/tooling/graphcast.mdx
index aec7d84070c3..5aa86adcc8da 100644
--- a/website/src/pages/cs/indexing/tooling/graphcast.mdx
+++ b/website/src/pages/cs/indexing/tooling/graphcast.mdx
@@ -10,10 +10,10 @@ V současné době jsou náklady na vysílání informací ostatním účastník
Graphcast SDK (Vývoj softwaru Kit) umožňuje vývojářům vytvářet rádia, což jsou aplikace napájené drby, které mohou indexery spouštět k danému účelu. Máme také v úmyslu vytvořit několik Radios (nebo poskytnout podporu jiným vývojářům/týmům, které chtějí Radios vytvořit) pro následující případy použití:
-- Křížová kontrola integrity dat subgrafu v reálném čase ([Podgraf Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)).
-- Provádění aukcí a koordinace pro warp synchronizaci podgrafů, substreamů a dat Firehose z jiných Indexerů.
-- Vlastní hlášení o analýze aktivních dotazů, včetně objemů požadavků na dílčí grafy, objemů poplatků atd.
-- Vlastní hlášení o analýze indexování, včetně času indexování podgrafů, nákladů na plyn obsluhy, zjištěných chyb indexování atd.
+- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)).
+- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers.
+- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc.
+- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc.
- Vlastní hlášení informací o zásobníku včetně verze grafového uzlu, verze Postgres, verze klienta Ethereum atd.
### Dozvědět se více
diff --git a/website/src/pages/cs/resources/benefits.mdx b/website/src/pages/cs/resources/benefits.mdx
index e18158242265..d0b336ece33a 100644
--- a/website/src/pages/cs/resources/benefits.mdx
+++ b/website/src/pages/cs/resources/benefits.mdx
@@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar
‡Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries.
-Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet.
+Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet.
-Kurátorování signálu na podgrafu je volitelný jednorázový čistý nulový náklad (např. na podgrafu lze kurátorovat signál v hodnotě $1k a později jej stáhnout - s potenciálem získat v tomto procesu výnosy).
+Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process).
## No Setup Costs & Greater Operational Efficiency
@@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy
Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally.
-Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/).
+Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/).
diff --git a/website/src/pages/cs/resources/glossary.mdx b/website/src/pages/cs/resources/glossary.mdx
index 70161f581585..49fd1f60c539 100644
--- a/website/src/pages/cs/resources/glossary.mdx
+++ b/website/src/pages/cs/resources/glossary.mdx
@@ -4,51 +4,51 @@ title: Glosář
- **The Graph**: A decentralized protocol for indexing and querying data.
-- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer.
+- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer.
-- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs.
-- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network.
+- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network.
-- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone.
+- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone.
- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries.
- **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards.
- 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network.
+ 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network.
- 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually.
+ 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually.
- **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit.
- **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake.
-- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers.
+- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers.
-- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs.
+- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs.
- **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned.
-- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph.
+- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph.
-- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned.
+- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned.
-- **Data Consumer**: Any application or user that queries a subgraph.
+- **Data Consumer**: Any application or user that queries a Subgraph.
-- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network.
+- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network.
-- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example.
+- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example.
- **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day.
-- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses:
+- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses:
- 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated.
+ 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated.
- 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data.
+ 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data.
-- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs.
+- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs.
- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide.
@@ -56,28 +56,28 @@ title: Glosář
- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned.
-- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT.
+- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT.
- **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT.
- **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network.
-- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer.
+- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer.
-- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer.
+- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer.
-- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations.
+- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations.
- **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way.
-- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol.
+- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol.
- **Graph CLI**: A command line interface tool for building and deploying to The Graph.
- **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again.
-- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake.
+- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake.
-- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings.
+- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings.
-- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2).
+- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2).
diff --git a/website/src/pages/cs/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/cs/resources/migration-guides/assemblyscript-migration-guide.mdx
index 756873dd8fbb..8af6d2817679 100644
--- a/website/src/pages/cs/resources/migration-guides/assemblyscript-migration-guide.mdx
+++ b/website/src/pages/cs/resources/migration-guides/assemblyscript-migration-guide.mdx
@@ -2,13 +2,13 @@
title: Průvodce migrací AssemblyScript
---
-Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉
+Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉
-To umožní vývojářům podgrafů používat novější funkce jazyka AS a standardní knihovny.
+That will enable Subgraph developers to use newer features of the AS language and standard library.
This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a version higher than (or equal to) that, you've already been using version `0.19.10` of AssemblyScript 🙂
-> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest.
+> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest.
## Funkce
@@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `
## Jak provést upgrade?
-1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`:
+1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`:
```yaml
...
@@ -52,7 +52,7 @@ dataSources:
...
mapping:
...
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
...
```
@@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null
maybeValue.aMethod()
```
-Pokud si nejste jisti, kterou verzi zvolit, doporučujeme vždy použít bezpečnou verzi. Pokud hodnota neexistuje, možná budete chtít provést pouze časný příkaz if s návratem v obsluze podgrafu.
+If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to just do an early `if` statement with a return in your Subgraph handler.
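A sketch of that early-return pattern, simplified to plain TypeScript for illustration (the `load` function here is a stand-in for the guide's hypothetical loader, and the handler returns a string purely so the behavior is observable):

```typescript
// Hypothetical loader: returns null when the value is missing
function load(): string | null {
  return null
}

export function handleSomething(): string {
  const maybeValue = load() // safe version: may be null

  if (maybeValue == null) {
    // Early return instead of breaking at runtime
    return "skipped"
  }

  // maybeValue is guaranteed non-null from here on
  return maybeValue.toUpperCase()
}
```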
### Proměnlivé stínování
@@ -132,7 +132,7 @@ Pokud jste použili stínování proměnných, musíte duplicitní proměnné p
### Nulová srovnání
-Při aktualizaci podgrafu může někdy dojít k těmto chybám:
+When upgrading your Subgraph, you might sometimes get errors like these:
```typescript
ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'.
@@ -330,7 +330,7 @@ let wrapper = new Wrapper(y)
wrapper.n = wrapper.n + x // doesn't give compile time errors as it should
```
-Otevřeli jsme kvůli tomu problém v kompilátoru jazyka AssemblyScript, ale zatím platí, že pokud provádíte tyto operace v mapování podgrafů, měli byste je změnit tak, aby se před nimi provedla kontrola null.
+We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check beforehand.
```typescript
let wrapper = new Wrapper(y)
@@ -352,7 +352,7 @@ value.x = 10
value.y = 'content'
```
-Zkompiluje se, ale za běhu se přeruší, což se stane, protože hodnota nebyla inicializována, takže se ujistěte, že váš podgraf inicializoval své hodnoty, například takto:
+It will compile but break at runtime because the value hasn't been initialized, so make sure your Subgraph initializes its values, like this:
```typescript
var value = new Type() // initialized
diff --git a/website/src/pages/cs/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/cs/resources/migration-guides/graphql-validations-migration-guide.mdx
index 7f273724aff4..4051faab8eef 100644
--- a/website/src/pages/cs/resources/migration-guides/graphql-validations-migration-guide.mdx
+++ b/website/src/pages/cs/resources/migration-guides/graphql-validations-migration-guide.mdx
@@ -1,5 +1,5 @@
---
-title: Průvodce migrací na GraphQL Validace
+title: GraphQL Validations Migration Guide
---
Brzy bude `graph-node` podporovat 100% pokrytí [GraphQL Validations specifikace](https://spec.graphql.org/June2018/#sec-Validation).
@@ -20,7 +20,7 @@ Chcete-li být v souladu s těmito validacemi, postupujte podle průvodce migrac
Pomocí migračního nástroje CLI můžete najít případné problémy v operacích GraphQL a opravit je. Případně můžete aktualizovat koncový bod svého klienta GraphQL tak, aby používal koncový bod `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME`. Testování dotazů proti tomuto koncovému bodu vám pomůže najít problémy ve vašich dotazech.
-> Není nutné migrovat všechny podgrafy, pokud používáte [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) nebo [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), ty již zajistí, že vaše dotazy jsou platné.
+> Not all Subgraphs will need to be migrated: if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), these tools already ensure that your queries are valid.
## Migrační nástroj CLI
diff --git a/website/src/pages/cs/resources/roles/curating.mdx b/website/src/pages/cs/resources/roles/curating.mdx
index c8b9caf18e2e..f06866a7c0ee 100644
--- a/website/src/pages/cs/resources/roles/curating.mdx
+++ b/website/src/pages/cs/resources/roles/curating.mdx
@@ -2,37 +2,37 @@
title: Kurátorování
---
-Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index.
+Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which Subgraphs to index.
## What Does Signaling Mean for The Graph Network?
-Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed.
+Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed.
-Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives.
+Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives.
-Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them.
+Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them.
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
+The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
-When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
+When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
-If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with.
+If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with.
-Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below).
+Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below).
-
+
## Jak signalizovat
-Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/)
+Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/)
-Kurátor si může zvolit, zda bude signalizovat na konkrétní verzi podgrafu, nebo zda se jeho signál automaticky přenese na nejnovější produkční sestavení daného podgrafu. Obě strategie jsou platné a mají své výhody i nevýhody.
+A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons.
-Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred.
+Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred.
Automatická migrace signálu na nejnovější produkční sestavení může být cenná, protože zajistí, že se poplatky za dotazy budou neustále zvyšovat. Při každém kurátorství se platí 1% kurátorský poplatek. Při každé migraci také zaplatíte 0,5% kurátorskou daň. Vývojáři podgrafu jsou odrazováni od častého publikování nových verzí - musí zaplatit 0.5% kurátorskou daň ze všech automaticky migrovaných kurátorských podílů.
-> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy.
+> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than the curators who follow, because the first curator initializes the curation share tokens and also transfers tokens into The Graph proxy.
## Withdrawing your GRT
@@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time.
Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax).
-Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled.
+Once a curator withdraws their signal, Indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled.
-However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph.
+However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph.
## Rizika
1. Trh s dotazy je v Graf ze své podstaty mladý a existuje riziko, že vaše %APY může být nižší, než očekáváte, v důsledku dynamiky rodícího se trhu.
-2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned.
-3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/).
-4. Podgraf může selhat kvůli chybě. Za neúspěšný podgraf se neúčtují poplatky za dotaz. V důsledku toho budete muset počkat, až vývojář chybu opraví a nasadí novou verzi.
- - Pokud jste přihlášeni k odběru nejnovější verze podgrafu, vaše sdílené položky se automaticky přemigrují na tuto novou verzi. Při tom bude účtována 0,5% kurátorská daň.
- - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax.
+2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned.
+3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/).
+4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version.
+ - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax.
+ - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax.
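The tax figures in the risks above can be sketched as a small calculation. This is a minimal illustration using the 1% curation tax on signaling and the 0.5% auto-migrate tax stated in this section; the function name is hypothetical, not part of the protocol contracts:

```python
def grt_after_curation_tax(signal_grt: float, tax_rate: float = 0.01) -> float:
    """GRT remaining in curation shares after the curation tax is burned."""
    return signal_grt * (1 - tax_rate)

# Signaling 1,000 GRT incurs the 1% curation tax, leaving about 990 GRT in shares.
initial = grt_after_curation_tax(1_000)
# If those shares later auto-migrate to a new version, the 0.5% tax applies again.
after_migration = grt_after_curation_tax(initial, 0.005)
print(initial, after_migration)
```

This also makes the risk in point 4 concrete: re-signaling manually after a failure costs the full 1% again, while auto-migration costs half that.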
## Nejčastější dotazy ke kurátorství
### 1. Kolik % z poplatků za dotazy kurátoři vydělávají?
-By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance.
+By signaling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance.
-### 2. Jak se rozhodnu, které podgrafy jsou kvalitní a na kterých je třeba signalizovat?
+### 2. How do I decide which Subgraphs are high quality to signal on?
-Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result:
+Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result:
-- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future
-- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on.
+- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future
+- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signaling on.
-### 3. Jaké jsou náklady na aktualizaci podgrafu?
+### 3. What’s the cost of updating a Subgraph?
-Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas.
+Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas.
-### 4. Jak často mohu svůj podgraf aktualizovat?
+### 4. How often can I update my Subgraph?
-Doporučujeme, abyste podgrafy neaktualizovali příliš často. Další podrobnosti naleznete v otázce výše.
+It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details.
### 5. Mohu prodat své kurátorské podíly?
diff --git a/website/src/pages/cs/resources/subgraph-studio-faq.mdx b/website/src/pages/cs/resources/subgraph-studio-faq.mdx
index a67af0f6505e..1f036fb46484 100644
--- a/website/src/pages/cs/resources/subgraph-studio-faq.mdx
+++ b/website/src/pages/cs/resources/subgraph-studio-faq.mdx
@@ -4,7 +4,7 @@ title: FAQs Podgraf Studio
## 1. Co je Podgraf Studio?
-[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys.
+[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys.
## 2. Jak vytvořím klíč API?
@@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th
Po vytvoření klíče API můžete v části Zabezpečení definovat domény, které se mohou dotazovat na konkrétní klíč API.
-## 5. Mohu svůj podgraf převést na jiného vlastníka?
+## 5. Can I transfer my Subgraph to another owner?
-Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'.
+Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'.
-Všimněte si, že po přenesení podgrafu jej již nebudete moci ve Studio zobrazit ani upravovat.
+Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred.
-## 6. Jak najdu adresy URL dotazů pro podgrafy, pokud nejsem Vývojář podgrafu, který chci použít?
+## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use?
-You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio.
+You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio.
-Nezapomeňte, že si můžete vytvořit klíč API a dotazovat se na libovolný podgraf zveřejněný v síti, i když si podgraf vytvoříte sami. Tyto dotazy prostřednictvím nového klíče API jsou placené dotazy jako jakékoli jiné v síti.
+Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. These queries via the new API key are paid queries, like any other on the network.
diff --git a/website/src/pages/cs/resources/tokenomics.mdx b/website/src/pages/cs/resources/tokenomics.mdx
index 92b1514574b4..66eefd5b8b1a 100644
--- a/website/src/pages/cs/resources/tokenomics.mdx
+++ b/website/src/pages/cs/resources/tokenomics.mdx
@@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s
## Přehled
-The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph.
+The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph.
## Specifics
@@ -24,9 +24,9 @@ There are four primary network participants:
1. Delegators - Delegate GRT to Indexers & secure the network
-2. Kurátoři - nalezení nejlepších podgrafů pro indexátory
+2. Curators - Find the best Subgraphs for Indexers
-3. Developers - Build & query subgraphs
+3. Developers - Build & query Subgraphs
4. Indexery - páteř blockchainových dat
@@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth
## Delegators (Passively earn GRT)
-Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually.
+Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9% and 12% annually.
For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually.
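The worked example above is simple proportional arithmetic. As an illustrative sketch (the helper name is hypothetical, and real rewards vary with query fees, issuance, and the Indexer's performance):

```python
def annual_delegation_reward(delegated_grt: float, effective_rate: float) -> float:
    """Approximate yearly GRT a Delegator earns at an Indexer's advertised rate."""
    return delegated_grt * effective_rate

# 15,000 GRT delegated at a 10% effective annual rate -> about 1,500 GRT per year.
print(annual_delegation_reward(15_000, 0.10))
```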
@@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head
## Curators (Earn GRT)
-Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed.
+Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed.
-Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation.
+Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation.
-Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT.
+Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT.
## Developers
-Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants.
+Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants.
-### Vytvoření podgrafu
+### Creating a Subgraph
-Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers.
+Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers.
-Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network.
+Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network.
-### Dotazování na existující podgraf
+### Querying an existing Subgraph
-Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph.
+Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph.
Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol.
@@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th
## Indexers (Earn GRT)
-Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs.
+Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs.
Indexers can earn GRT rewards in two ways:
-1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)).
+1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)).
-2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately.
+2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately.
-Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph.
+Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph.
In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve.
-Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network.
+Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network.
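The self-stake and delegation limits described here can be expressed as a quick check. This is a sketch using the 100,000 GRT minimum self-stake and the 16x delegation cap from this section; the constant and function names are illustrative, not protocol identifiers:

```python
MIN_SELF_STAKE_GRT = 100_000   # minimum self-stake to run an indexing node
DELEGATION_CAP_MULTIPLE = 16   # usable delegation is capped at 16x self-stake

def usable_delegation(self_stake: float, delegated: float) -> float:
    """GRT delegation the Indexer can actually allocate; any excess sits idle."""
    if self_stake < MIN_SELF_STAKE_GRT:
        raise ValueError("below the 100,000 GRT minimum self-stake")
    return min(delegated, DELEGATION_CAP_MULTIPLE * self_stake)

# An Indexer self-staking 100,000 GRT can use at most 1,600,000 GRT of delegation;
# with 2,000,000 GRT delegated, they are "over-delegated" by 400,000 GRT.
print(usable_delegation(100_000, 2_000_000))  # 1600000
```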
The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors.
## Token Supply: Burning & Issuance
-The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network.
+The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network.
-The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data.
+The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data.

diff --git a/website/src/pages/cs/sps/introduction.mdx b/website/src/pages/cs/sps/introduction.mdx
index f0180d6a569b..4938d23102e4 100644
--- a/website/src/pages/cs/sps/introduction.mdx
+++ b/website/src/pages/cs/sps/introduction.mdx
@@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs
sidebarTitle: Úvod
---
-Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
## Přehled
-Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
+Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
### Specifics
There are two methods of enabling this technology:
-1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph.
+1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph.
-2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities.
+2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities.
-You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node.
+You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node.
### Další zdroje
@@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil
- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
diff --git a/website/src/pages/cs/sps/sps-faq.mdx b/website/src/pages/cs/sps/sps-faq.mdx
index 657b027cf5e9..25e77dc3c7f1 100644
--- a/website/src/pages/cs/sps/sps-faq.mdx
+++ b/website/src/pages/cs/sps/sps-faq.mdx
@@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi
Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams.
-## Co jsou substreamu napájen podgrafy?
+## What are Substreams-powered Subgraphs?
-[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities.
+[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities.
-If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API.
+If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API.
-## Jak se liší substream, které jsou napájeny podgrafy, od podgrafů?
+## How are Substreams-powered Subgraphs different from Subgraphs?
Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain.
-By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times.
+By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times.
-## Jaké jsou výhody používání substreamu, které jsou založeny na podgraf?
+## What are the benefits of using Substreams-powered Subgraphs?
-Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
+Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
## Jaké jsou výhody Substreams?
@@ -35,7 +35,7 @@ Používání ubstreams má mnoho výhod, mimo jiné:
- Vysoce výkonné indexování: Řádově rychlejší indexování prostřednictvím rozsáhlých klastrů paralelních operací (viz BigQuery).
-- Umyvadlo kdekoli: Data můžete ukládat kamkoli chcete: Vložte data do PostgreSQL, MongoDB, Kafka, podgrafy, ploché soubory, tabulky Google.
+- Sink anywhere: Sink your data anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets.
- Programovatelné: Pomocí kódu můžete přizpůsobit extrakci, provádět agregace v čase transformace a modelovat výstup pro více zdrojů.
@@ -63,17 +63,17 @@ Používání Firehose přináší mnoho výhod, včetně:
- Využívá ploché soubory: Blockchain data jsou extrahována do plochých souborů, což je nejlevnější a nejoptimálnější dostupný výpočetní zdroj.
-## Kde mohou vývojáři získat více informací o substreamu, které jsou založeny na podgraf a substreamu?
+## Where can developers access more information about Substreams-powered Subgraphs and Substreams?
The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules.
-The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph.
+The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph.
The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code.
## Jaká je role modulů Rust v Substreamu?
-Moduly Rust jsou ekvivalentem mapovačů AssemblyScript v podgraf. Jsou kompilovány do WASM podobným způsobem, ale programovací model umožňuje paralelní provádění. Definují druh transformací a agregací, které chcete aplikovat na surová data blockchainu.
+Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data.
See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details.
@@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst
Při použití substreamů probíhá kompozice na transformační vrstvě, což umožňuje opakované použití modulů uložených v mezipaměti.
-Jako příklad může Alice vytvořit cenový modul DEX, Bob jej může použít k vytvoření agregátoru objemu pro některé tokeny, které ho zajímají, a Lisa může zkombinovat čtyři jednotlivé cenové moduly DEX a vytvořit cenové orákulum. Jediný požadavek Substreams zabalí všechny moduly těchto jednotlivců, propojí je dohromady a nabídne mnohem sofistikovanější tok dat. Tento proud pak může být použit k naplnění podgrafu a může být dotazován spotřebiteli.
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of his interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph, and be queried by consumers.
## Jak můžete vytvořit a nasadit Substreams využívající podgraf?
After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/).
-## Kde najdu příklady podgrafů Substreams a Substreams-powered?
+## Where can I find examples of Substreams and Substreams-powered Subgraphs?
-Příklady podgrafů Substreams a Substreams-powered najdete na [tomto repozitáři Github](https://github.com/pinax-network/awesome-substreams).
+You can visit [this GitHub repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs.
-## Co znamenají substreams a podgrafy napájené substreams pro síť grafů?
+## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network?
Integrace slibuje mnoho výhod, včetně extrémně výkonného indexování a větší složitelnosti díky využití komunitních modulů a stavění na nich.
diff --git a/website/src/pages/cs/sps/triggers.mdx b/website/src/pages/cs/sps/triggers.mdx
index 06a8845e4daf..b0c4bea23f3d 100644
--- a/website/src/pages/cs/sps/triggers.mdx
+++ b/website/src/pages/cs/sps/triggers.mdx
@@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL.
## Přehled
-Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
+Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
-By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework.
+By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework.
### Defining `handleTransactions`
-The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created.
+The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created.
```tsx
export function handleTransactions(bytes: Uint8Array): void {
@@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file:
1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object
2. Looping over the transactions
-3. Create a new subgraph entity for every transaction
+3. Create a new Subgraph entity for every transaction
-To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/).
+To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/).
### Další zdroje
diff --git a/website/src/pages/cs/sps/tutorial.mdx b/website/src/pages/cs/sps/tutorial.mdx
index 3f98c57508bd..67d564483af1 100644
--- a/website/src/pages/cs/sps/tutorial.mdx
+++ b/website/src/pages/cs/sps/tutorial.mdx
@@ -3,7 +3,7 @@ title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana'
sidebarTitle: Tutorial
---
-Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token.
+Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token.
## Začněte
@@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs
### Step 2: Generate the Subgraph Manifest
-Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container:
+Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container:
```bash
substreams codegen subgraph
@@ -73,7 +73,7 @@ dataSources:
moduleName: map_spl_transfers # Module defined in the substreams.yaml
file: ./my-project-sol-v0.1.0.spkg
mapping:
- apiVersion: 0.0.7
+ apiVersion: 0.0.9
kind: substreams/graph-entities
file: ./src/mappings.ts
handler: handleTriggers
@@ -81,7 +81,7 @@ dataSources:
### Step 3: Define Entities in `schema.graphql`
-Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file.
+Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file.
Here is an example:
@@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s
With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory.
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id:
+The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into Subgraph entities:
```ts
import { Protobuf } from 'as-proto/assembly'
@@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command:
npm run protogen
```
-This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler.
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler.
### Závěr
-Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
+Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
### Video Tutorial
diff --git a/website/src/pages/cs/subgraphs/_meta-titles.json b/website/src/pages/cs/subgraphs/_meta-titles.json
index 3fd405eed29a..c2d850dfc35c 100644
--- a/website/src/pages/cs/subgraphs/_meta-titles.json
+++ b/website/src/pages/cs/subgraphs/_meta-titles.json
@@ -2,5 +2,5 @@
"querying": "Querying",
"developing": "Developing",
"guides": "How-to Guides",
- "best-practices": "Best Practices"
+ "best-practices": "Osvědčené postupy"
}
diff --git a/website/src/pages/cs/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/cs/subgraphs/best-practices/avoid-eth-calls.mdx
index 3ce9c29a17a0..2783957614bf 100644
--- a/website/src/pages/cs/subgraphs/best-practices/avoid-eth-calls.mdx
+++ b/website/src/pages/cs/subgraphs/best-practices/avoid-eth-calls.mdx
@@ -1,19 +1,19 @@
---
title: Doporučený postup pro podgraf 4 - Zlepšení rychlosti indexování vyhnutím se eth_calls
-sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls'
+sidebarTitle: Avoiding eth_calls
---
## TLDR
-`eth_calls` jsou volání, která lze provést z podgrafu do uzlu Ethereum. Tato volání zabírají značnou dobu, než vrátí data, což zpomaluje indexování. Pokud je to možné, navrhněte chytré kontrakty tak, aby emitovaly všechna potřebná data, takže nebudete muset používat `eth_calls`.
+`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`.
## Proč je dobré se vyhnout `eth_calls`
-Podgraf jsou optimalizovány pro indexování dat událostí emitovaných z chytré smlouvy. Podgraf může také indexovat data pocházející z `eth_call`, což však může indexování podgrafu výrazně zpomalit, protože `eth_calls` vyžadují externí volání chytrých smluv. Odezva těchto volání nezávisí na podgrafu, ale na konektivitě a odezvě dotazovaného uzlu Ethereum. Minimalizací nebo eliminací eth_calls v našich podgrafech můžeme výrazně zvýšit rychlost indexování.
+Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down Subgraph indexing, as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating `eth_calls` in our Subgraphs, we can significantly improve our indexing speed.
### Jak vypadá eth_call?
-`eth_calls` jsou často nutné, pokud data potřebná pro podgraf nejsou dostupná prostřednictvím emitovaných událostí. Uvažujme například scénář, kdy podgraf potřebuje zjistit, zda jsou tokeny ERC20 součástí určitého poolu, ale smlouva emituje pouze základní událost `Transfer` a neemituje událost, která by obsahovala data, která potřebujeme:
+`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need:
```yaml
event Transfer(address indexed from, address indexed to, uint256 value);
@@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void {
}
```
-To je funkční, ale není to ideální, protože to zpomaluje indexování našeho podgrafu.
+This is functional; however, it is not ideal as it slows down our Subgraph’s indexing.
## Jak odstranit `eth_calls`
@@ -54,7 +54,7 @@ V ideálním případě by měl být inteligentní kontrakt aktualizován tak, a
event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo);
```
-Díky této aktualizaci může podgraf přímo indexovat požadovaná data bez externích volání:
+With this update, the Subgraph can directly index the required data without external calls:
```typescript
import { Address } from '@graphprotocol/graph-ts'
@@ -96,11 +96,11 @@ calls:
Samotná obslužná rutina přistupuje k výsledku tohoto `eth_call` přesně tak, jak je uvedeno v předchozí části, a to navázáním na smlouvu a provedením volání. graph-node cachuje výsledky deklarovaných `eth_call` v paměti a volání obslužné rutiny získá výsledek z této paměťové cache místo skutečného volání RPC.
-Poznámka: Deklarované eth_calls lze provádět pouze v podgraf s verzí specVersion >= 1.2.0.
+Note: Declared `eth_calls` can only be made in Subgraphs with specVersion >= 1.2.0.
## Závěr
-You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs.
+You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs.
## Subgraph Best Practices 1-6
diff --git a/website/src/pages/cs/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/cs/subgraphs/best-practices/derivedfrom.mdx
index f6ec5a660bf2..fc9dce04c8c0 100644
--- a/website/src/pages/cs/subgraphs/best-practices/derivedfrom.mdx
+++ b/website/src/pages/cs/subgraphs/best-practices/derivedfrom.mdx
@@ -1,11 +1,11 @@
---
title: Podgraf Doporučený postup 2 - Zlepšení indexování a rychlosti dotazů pomocí @derivedFrom
-sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom'
+sidebarTitle: Arrays with @derivedFrom
---
## TLDR
-Pole ve vašem schématu mohou skutečně zpomalit výkon podgrafu, pokud jejich počet přesáhne tisíce položek. Pokud je to možné, měla by se při použití polí používat direktiva `@derivedFrom`, která zabraňuje vzniku velkých polí, zjednodušuje obslužné programy a snižuje velikost jednotlivých entit, čímž výrazně zvyšuje rychlost indexování a výkon dotazů.
+Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly.
## Jak používat směrnici `@derivedFrom`
@@ -15,7 +15,7 @@ Stačí ve schématu za pole přidat směrnici `@derivedFrom`. Takto:
comments: [Comment!]! @derivedFrom(field: "post")
```
-`@derivedFrom` vytváří efektivní vztahy typu one-to-many, které umožňují dynamické přiřazení entity k více souvisejícím entitám na základě pole v související entitě. Tento přístup odstraňuje nutnost ukládat duplicitní data na obou stranách vztahu, čímž se podgraf stává efektivnějším.
+`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient.
### Příklad případu použití pro `@derivedFrom`
@@ -60,17 +60,17 @@ type Comment @entity {
Pouhým přidáním direktivy `@derivedFrom` bude toto schéma ukládat "Komentáře“ pouze na straně "Komentáře“ vztahu a nikoli na straně "Příspěvek“ vztahu. Pole se ukládají napříč jednotlivými řádky, což umožňuje jejich výrazné rozšíření. To může vést k obzvláště velkým velikostem, pokud je jejich růst neomezený.
-Tím se nejen zefektivní náš podgraf, ale také se odemknou tři funkce:
+This will not only make our Subgraph more efficient, but it will also unlock three features:
1. Můžeme se zeptat na `Post` a zobrazit všechny jeho komentáře.
2. Můžeme provést zpětné vyhledávání a dotazovat se na jakýkoli `Komentář` a zjistit, ze kterého příspěvku pochází.
-3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings.
+3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings.
## Závěr
-Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval.
+Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval.
For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/).
diff --git a/website/src/pages/cs/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/cs/subgraphs/best-practices/grafting-hotfix.mdx
index 7a2dbdda86f6..541cf76d0f7a 100644
--- a/website/src/pages/cs/subgraphs/best-practices/grafting-hotfix.mdx
+++ b/website/src/pages/cs/subgraphs/best-practices/grafting-hotfix.mdx
@@ -1,26 +1,26 @@
---
title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment
-sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing'
+sidebarTitle: Grafting and Hotfixing
---
## TLDR
-Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones.
+Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones.
### Přehled
-This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services.
+This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services.
## Benefits of Grafting for Hotfixes
1. **Rapid Deployment**
- - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing.
- - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted.
+ - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing.
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted.
2. **Data Preservation**
- - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records.
+ - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records.
- **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data.
3. **Efficiency**
@@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati
1. **Initial Deployment Without Grafting**
- - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected.
- - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes.
+ - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected.
+ - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes.
2. **Implementing the Hotfix with Grafting**
- **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event.
- - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix.
- - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph.
- - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible.
+ - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix.
+ - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph.
+ - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible.
3. **Post-Hotfix Actions**
- - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue.
- - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance.
+ - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue.
+ - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance.
> Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance.
- - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph.
+ - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph.
4. **Important Considerations**
- **Careful Block Selection**: Choose the graft block number carefully to prevent data loss.
- **Tip**: Use the block number of the last correctly processed event.
- - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID.
- - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment.
- - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features.
+ - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID.
+ - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment.
+ - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features.
## Example: Deploying a Hotfix with Grafting
-Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix.
+Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix.
1. **Failed Subgraph Manifest (subgraph.yaml)**
```yaml
- specVersion: 1.0.0
+ specVersion: 1.3.0
schema:
file: ./schema.graphql
dataSources:
@@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
startBlock: 5000000
mapping:
kind: ethereum/events
- apiVersion: 0.0.7
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Withdrawal
@@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
2. **New Grafted Subgraph Manifest (subgraph.yaml)**
```yaml
- specVersion: 1.0.0
+ specVersion: 1.3.0
schema:
file: ./schema.graphql
dataSources:
@@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
startBlock: 6000001 # Block after the last indexed block
mapping:
kind: ethereum/events
- apiVersion: 0.0.7
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Withdrawal
@@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
features:
- grafting
graft:
- base: QmBaseDeploymentID # Deployment ID of the failed subgraph
+ base: QmBaseDeploymentID # Deployment ID of the failed Subgraph
block: 6000000 # Last successfully indexed block
```
**Explanation:**
-- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract.
+- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract.
- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error.
- **Grafting Configuration**:
- - **base**: Deployment ID of the failed subgraph.
+ - **base**: Deployment ID of the failed Subgraph.
- **block**: Block number where grafting should begin.
3. **Deployment Steps**
@@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
- **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations.
- **Deploy the Subgraph**:
- Authenticate with the Graph CLI.
- - Deploy the new subgraph using `graph deploy`.
+ - Deploy the new Subgraph using `graph deploy`.
4. **Post-Deployment**
- - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point.
+ - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point.
- **Monitor Data**: Ensure that new data is being captured and the hotfix is effective.
- **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability.
@@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance.
-- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema.
+- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema.
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing.
-- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability.
+- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability.
### Risk Management
@@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec
## Závěr
-Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to:
+Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to:
- **Quickly Recover** from critical errors without re-indexing.
- **Preserve Historical Data**, maintaining continuity for applications and users.
- **Ensure Service Availability** by minimizing downtime during critical fixes.
-However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability.
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability.
## Další zdroje
- **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting
- **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID.
-By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable.
+By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable.
## Subgraph Best Practices 1-6
diff --git a/website/src/pages/cs/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/cs/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx
index 5b058ee9d7cf..e4e191353476 100644
--- a/website/src/pages/cs/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx
+++ b/website/src/pages/cs/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx
@@ -1,6 +1,6 @@
---
title: Osvědčený postup 3 - Zlepšení indexování a výkonu dotazů pomocí neměnných entit a bytů jako ID
-sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs'
+sidebarTitle: Immutable Entities and Bytes as IDs
---
## TLDR
@@ -50,12 +50,12 @@ I když jsou možné i jiné typy ID, například String a Int8, doporučuje se
### Důvody, proč nepoužívat bajty jako IDs
1. Pokud musí být IDs entit čitelné pro člověka, například automaticky doplňované číselné IDs nebo čitelné řetězce, neměly by být použity bajty pro IDs.
-2. Při integraci dat podgrafu s jiným datovým modelem, který nepoužívá bajty jako IDs, by se bajty jako IDs neměly používat.
+2. If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used.
3. Zlepšení výkonu indexování a dotazování není žádoucí.
### Konkatenace s byty jako IDs
-V mnoha podgrafech se běžně používá spojování řetězců ke spojení dvou vlastností události do jediného ID, například pomocí `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. Protože se však tímto způsobem vrací řetězec, značně to zhoršuje indexování podgrafů a výkonnost dotazování.
+It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, because this returns a string, it significantly impedes Subgraph indexing and querying performance.
Místo toho bychom měli použít metodu `concatI32()` pro spojování vlastností událostí. Výsledkem této strategie je ID `Bytes`, které je mnohem výkonnější.
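To illustrate why this is faster, here is a conceptual sketch in plain TypeScript (not graph-ts) of what `concatI32()` does: the ID stays a compact byte array instead of growing into a long hex string. In an actual mapping this would simply be `event.transaction.hash.concatI32(event.logIndex.toI32())`.

```typescript
// Append a 32-bit big-endian integer to a byte array, mirroring the idea
// behind graph-ts Bytes.concatI32(). Plain TypeScript sketch, not graph-ts.
function concatI32(bytes: Uint8Array, value: number): Uint8Array {
  const out = new Uint8Array(bytes.length + 4);
  out.set(bytes, 0);
  // false => big-endian, so the integer occupies the last four bytes
  new DataView(out.buffer).setInt32(bytes.length, value, false);
  return out;
}

// A hypothetical 32-byte transaction hash plus log index 5:
const txHash = new Uint8Array(32).fill(0xab);
const id = concatI32(txHash, 5);
// 36 bytes total, versus a 64-character hex string plus "-" plus the decimal
// log index for the string-concatenation approach.
```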
@@ -172,7 +172,7 @@ Odpověď na dotaz:
## Závěr
-Bylo prokázáno, že použití neměnných entit i bytů jako ID výrazně zvyšuje efektivitu podgrafů. Testy konkrétně ukázaly až 28% nárůst výkonu dotazů a až 48% zrychlení indexace.
+Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds.
Více informací o používání nezměnitelných entit a bytů jako ID najdete v tomto příspěvku na blogu Davida Lutterkorta, softwarového inženýra ve společnosti Edge & Node: [Dvě jednoduchá vylepšení výkonu podgrafu](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/).
diff --git a/website/src/pages/cs/subgraphs/best-practices/pruning.mdx b/website/src/pages/cs/subgraphs/best-practices/pruning.mdx
index e6b23f71c409..6fd068f449d6 100644
--- a/website/src/pages/cs/subgraphs/best-practices/pruning.mdx
+++ b/website/src/pages/cs/subgraphs/best-practices/pruning.mdx
@@ -1,11 +1,11 @@
---
title: Doporučený postup 1 - Zlepšení rychlosti dotazu pomocí ořezávání podgrafů
-sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints'
+sidebarTitle: Pruning with indexerHints
---
## TLDR
-[Pruning](/developing/creating-a-subgraph/#prune) odstraní archivní entity z databáze podgrafu až do daného bloku a odstranění nepoužívaných entit z databáze podgrafu zlepší výkonnost dotazu podgrafu, často výrazně. Použití `indexerHints` je snadný způsob, jak podgraf ořezat.
+[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph.
## Jak prořezat podgraf pomocí `indexerHints`
@@ -13,14 +13,14 @@ Přidejte do manifestu sekci `indexerHints`.
`indexerHints` má tři možnosti `prune`:
-- `prune: auto`: Udržuje minimální potřebnou historii nastavenou indexátorem, čímž optimalizuje výkon dotazu. Toto je obecně doporučené nastavení a je výchozí pro všechny podgrafy vytvořené pomocí `graph-cli` >= 0.66.0.
+- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0.
- `prune: <number of blocks to retain>`: Nastaví vlastní omezení počtu historických bloků, které se mají zachovat.
- `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. `prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired.
-Aktualizací souboru `subgraph.yaml` můžeme do podgrafů přidat `indexerHints`:
+We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`:
```yaml
-specVersion: 1.0.0
+specVersion: 1.3.0
schema:
file: ./schema.graphql
indexerHints:
@@ -39,7 +39,7 @@ dataSources:
## Závěr
-Ořezávání pomocí `indexerHints` je osvědčeným postupem pro vývoj podgrafů, který nabízí významné zlepšení výkonu dotazů.
+Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements.
## Subgraph Best Practices 1-6
diff --git a/website/src/pages/cs/subgraphs/best-practices/timeseries.mdx b/website/src/pages/cs/subgraphs/best-practices/timeseries.mdx
index f35ab0913563..dae73ede9ff3 100644
--- a/website/src/pages/cs/subgraphs/best-practices/timeseries.mdx
+++ b/website/src/pages/cs/subgraphs/best-practices/timeseries.mdx
@@ -1,11 +1,11 @@
---
title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations
-sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations'
+sidebarTitle: Timeseries and Aggregations
---
## TLDR
-Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance.
+Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance.
## Přehled
@@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri
## How to Implement Timeseries and Aggregations
+### Prerequisites
+
+You need `specVersion` >= 1.1.0 for this feature.
+
### Defining Timeseries Entities
A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements:
@@ -51,7 +55,7 @@ Příklad:
type Data @entity(timeseries: true) {
id: Int8!
timestamp: Timestamp!
- price: BigDecimal!
+ amount: BigDecimal!
}
```
@@ -68,11 +72,11 @@ Příklad:
type Stats @aggregation(intervals: ["hour", "day"], source: "Data") {
id: Int8!
timestamp: Timestamp!
- sum: BigDecimal! @aggregate(fn: "sum", arg: "price")
+ sum: BigDecimal! @aggregate(fn: "sum", arg: "amount")
}
```
-In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum.
+In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum.
### Querying Aggregated Data
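Aggregations are read with an `interval` argument selecting one of the declared intervals; a sketch against the `Stats` aggregation defined above (the timestamp value is arbitrary):

```graphql
{
  stats(interval: "day", where: { timestamp_gt: 1704067200 }) {
    id
    timestamp
    sum
  }
}
```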
@@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar
### Závěr
-Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach:
+Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach:
- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead.
- Simplifies Development: Eliminates the need for manual aggregation logic in mappings.
- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness.
-By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs.
+By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs.
## Subgraph Best Practices 1-6
diff --git a/website/src/pages/cs/subgraphs/billing.mdx b/website/src/pages/cs/subgraphs/billing.mdx
index 4118bf1d451a..b78c375c4aee 100644
--- a/website/src/pages/cs/subgraphs/billing.mdx
+++ b/website/src/pages/cs/subgraphs/billing.mdx
@@ -4,12 +4,14 @@ title: Fakturace
## Querying Plans
-There are two plans to use when querying subgraphs on The Graph Network.
+There are two plans to use when querying Subgraphs on The Graph Network.
- **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp.
- **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases.
+Learn more about pricing [here](https://thegraph.com/studio-pricing/).
+
## Query Payments with credit card
diff --git a/website/src/pages/cs/subgraphs/developing/creating/advanced.mdx b/website/src/pages/cs/subgraphs/developing/creating/advanced.mdx
index 4fbf2b573c14..e8db267667c0 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/advanced.mdx
+++ b/website/src/pages/cs/subgraphs/developing/creating/advanced.mdx
@@ -4,9 +4,9 @@ title: Advanced Subgraph Features
## Přehled
-Add and implement advanced subgraph features to enhanced your subgraph's built.
+Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below:
+Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below:
| Feature | Name |
| ---------------------------------------------------- | ---------------- |
@@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar
| [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` |
| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` |
-For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be:
+For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be:
```yaml
-specVersion: 0.0.4
+specVersion: 1.3.0
description: Gravatar for Ethereum
features:
- fullTextSearch
@@ -25,7 +25,7 @@ features:
dataSources: ...
```
-> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used.
+> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used.
## Timeseries and Aggregations
@@ -33,9 +33,9 @@ Prerequisites:
- Subgraph specVersion must be ≥1.1.0.
-Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more.
+Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more.
-This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL.
+This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL.
### Example Schema
@@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified
## Nefatální
-Chyby indexování v již synchronizovaných podgrafech ve výchozím nastavení způsobí selhání podgrafy a zastavení synchronizace. Podgrafy lze alternativně nakonfigurovat tak, aby pokračovaly v synchronizaci i při přítomnosti chyb, a to ignorováním změn provedených obslužnou rutinou, která chybu vyvolala. To dává autorům podgrafů čas na opravu jejich podgrafů, zatímco dotazy jsou nadále obsluhovány proti poslednímu bloku, ačkoli výsledky mohou být nekonzistentní kvůli chybě, která chybu způsobila. Všimněte si, že některé chyby jsou stále fatální. Aby chyba nebyla fatální, musí být známo, že je deterministická.
+Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic.
-> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio.
+> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio.
-Povolení nefatálních chyb vyžaduje nastavení následujícího příznaku funkce v manifestu podgraf:
+Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest:
```yaml
-specVersion: 0.0.4
+specVersion: 1.3.0
description: Gravatar for Ethereum
features:
- nonFatalErrors
...
```
-The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example:
+The query must also opt in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example:
```graphql
foos(first: 100, subgraphError: allow) {
@@ -123,7 +123,7 @@ _meta {
}
```
-If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response:
+If the Subgraph encounters an error, that query will return both the data and a GraphQL error with the message `"indexing_error"`, as in this example response:
```graphql
"data": {
@@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a
## IPFS/Arweave File Data Sources
-Zdroje dat souborů jsou novou funkcí podgrafu pro přístup k datům mimo řetězec během indexování robustním a rozšiřitelným způsobem. Zdroje souborových dat podporují načítání souborů ze systému IPFS a z Arweave.
+File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave.
> To také vytváří základ pro deterministické indexování dat mimo řetězec a potenciální zavedení libovolných dat ze zdrojů HTTP.
@@ -221,7 +221,7 @@ templates:
- name: TokenMetadata
kind: file/ipfs
mapping:
- apiVersion: 0.0.7
+ apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/mapping.ts
handler: handleMetadata
@@ -290,7 +290,7 @@ Příklad:
import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates'
const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm'
-//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs.
+//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs.
export function handleTransfer(event: TransferEvent): void {
let token = Token.load(event.params.tokenId.toString())
@@ -317,23 +317,23 @@ Tím se vytvoří nový zdroj dat souborů, který bude dotazovat nakonfigurovan
This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity.
-> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file
+> Previously, this was the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file.
Gratulujeme, používáte souborové zdroje dat!
-#### Nasazení podgrafů
+#### Deploying your Subgraphs
-You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0.
+You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0.
#### Omezení
-Zpracovatelé a entity zdrojů dat souborů jsou izolovány od ostatních entit podgrafů, což zajišťuje, že jsou při provádění deterministické a nedochází ke kontaminaci zdrojů dat založených na řetězci. Přesněji řečeno:
+File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific:
- Entity vytvořené souborovými zdroji dat jsou neměnné a nelze je aktualizovat
- Obsluhy zdrojů dat souborů nemohou přistupovat k entita z jiných zdrojů dat souborů
- K entita přidruženým k datovým zdrojům souborů nelze přistupovat pomocí zpracovatelů založených na řetězci
-> Ačkoli by toto omezení nemělo být pro většinu případů použití problematické, pro některé může představovat složitost. Pokud máte problémy s modelováním dat založených na souborech v podgrafu, kontaktujte nás prosím prostřednictvím služby Discord!
+> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph!
Kromě toho není možné vytvářet zdroje dat ze zdroje dat souborů, ať už se jedná o zdroj dat v řetězci nebo jiný zdroj dat souborů. Toto omezení může být v budoucnu zrušeno.
@@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra
> **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0`
-Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments.
+Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments.
-- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data.
+- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data.
-- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain.
+- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain.
### How Topic Filters Work
-When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments.
+When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments.
- The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event.
@@ -401,7 +401,7 @@ In this example:
#### Configuration in Subgraphs
-Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured:
+Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured:
```yaml
eventHandlers:
@@ -436,7 +436,7 @@ In this configuration:
- `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender.
- `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver.
-- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`.
+- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`.
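A complete handler entry for this example might look like the following sketch (the event signature, handler name, and addresses are illustrative placeholders):

```yaml
eventHandlers:
  - event: Transfer(indexed address,indexed address,uint256)
    handler: handleDirectedTransfer
    topic1: ['0xAddressA'] # filter on the sender
    topic2: ['0xAddressB'] # filter on the receiver
```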
#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses
@@ -452,17 +452,17 @@ In this configuration:
- `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender.
- `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver.
-- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses.
+- The Subgraph will index transactions that occur in either direction between multiple addresses, allowing for comprehensive monitoring of interactions involving all addresses.
## Declared eth_call
> Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node.
-Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.
+Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.
This feature does the following:
-- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency.
+- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency.
- Allows faster data fetching, resulting in quicker query responses and a better user experience.
- Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient.
@@ -474,7 +474,7 @@ This feature does the following:
#### Scenario without Declarative `eth_calls`
-Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings.
+Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings.
Traditionally, these calls might be made sequentially:
@@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds
#### How it Works
-1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel.
+1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel.
2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously.
-3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing.
+3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing.
#### Example Configuration in Subgraph Manifest
Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`.
-`Subgraph.yaml` using `event.address`:
+`subgraph.yaml` using `event.address`:
```yaml
eventHandlers:
@@ -524,7 +524,7 @@ Details for the example above:
- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)`
- The `address` and `arguments` can be replaced with variables that will be available when the handler is executed.
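Putting the pieces together, a full handler declaration might look like this sketch (the `Swap` event signature and handler name are illustrative; the `Pool` call is the one named above):

```yaml
eventHandlers:
  - event: Swap(indexed address,indexed address,int256,int256,uint160,uint128,int24)
    handler: handleSwap
    calls:
      global0X128: Pool[event.address].feeGrowthGlobal0X128()
```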
-`Subgraph.yaml` using `event.params`
+`subgraph.yaml` using `event.params`
```yaml
calls:
@@ -535,22 +535,22 @@ calls:
> **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network).
-When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed.
+When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing Subgraph working again after it has failed.
-A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level:
+A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level:
```yaml
description: ...
graft:
- base: Qm... # Subgraph ID of base subgraph
+ base: Qm... # Subgraph ID of base Subgraph
block: 7345624 # Block number
```
-When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph.
+When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph.
-Protože se při roubování základní data spíše kopírují než indexují, je mnohem rychlejší dostat podgraf do požadovaného bloku než při indexování od nuly, i když počáteční kopírování dat může u velmi velkých podgrafů trvat i několik hodin. Během inicializace roubovaného podgrafu bude uzel Graf Uzel zaznamenávat informace o typů entit, které již byly zkopírovány.
+Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied.
-Štěpovaný podgraf může používat schéma GraphQL, které není totožné se schématem základního podgrafu, ale je s ním pouze kompatibilní. Musí to být platné schéma podgrafu jako takové, ale může se od schématu základního podgrafu odchýlit následujícími způsoby:
+The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways:
- It adds or removes entity types
- It removes attributes from entity types
@@ -560,4 +560,4 @@ Protože se při roubování základní data spíše kopírují než indexují,
- It adds or removes interfaces
- It changes for which types of entities an interface is implemented
-> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest.
+> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest.
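For reference, the `graft` block that triggers this behavior is a short manifest fragment; a sketch with placeholder values looks like:

```yaml
features:
  - grafting          # must be declared, per the Feature Management note above
graft:
  base: Qm...         # placeholder: deployment ID of the base Subgraph
  block: 7345624      # placeholder: block up to which base data is copied
```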
diff --git a/website/src/pages/cs/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/cs/subgraphs/developing/creating/assemblyscript-mappings.mdx
index fad0d6ebaa1a..00fb7cbcf275 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/assemblyscript-mappings.mdx
+++ b/website/src/pages/cs/subgraphs/developing/creating/assemblyscript-mappings.mdx
@@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t
For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled.
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events:
+In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events:
```javascript
import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity'
@@ -72,7 +72,7 @@ Pokud není pro pole v nové entitě se stejným ID nastavena žádná hodnota,
## Code Generation
-Aby byla práce s inteligentními smlouvami, událostmi a entitami snadná a typově bezpečná, může Graf CLI generovat typy AssemblyScript ze schématu GraphQL podgrafu a ABI smluv obsažených ve zdrojích dat.
+In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources.
This is done using
@@ -80,7 +80,7 @@ To se provádí pomocí
graph codegen [--output-dir ] []
```
-but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same:
+but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same:
```sh
# Yarn
@@ -90,7 +90,7 @@ yarn codegen
npm run codegen
```
-This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with.
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with.
```javascript
import {
@@ -102,12 +102,12 @@ import {
} from '../generated/Gravity/Gravity'
```
-In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with
+In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with
```javascript
import { Gravatar } from '../generated/schema'
```
-> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph.
+> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph.
-Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find.
+Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find.
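As a sketch of how the generated types fit together, a handler for the example `NewGravatar` event might look like the following (field names follow the example Subgraph's schema):

```javascript
import { NewGravatar } from '../generated/Gravity/Gravity'
import { Gravatar } from '../generated/schema'

export function handleNewGravatar(event: NewGravatar): void {
  // Generated entity classes provide type-safe field setters and save()
  let gravatar = new Gravatar(event.params.id.toHex())
  gravatar.owner = event.params.owner
  gravatar.displayName = event.params.displayName
  gravatar.imageUrl = event.params.imageUrl
  gravatar.save()
}
```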
diff --git a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/CHANGELOG.md
index 5d90888ac378..5f964d3cbb78 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/CHANGELOG.md
+++ b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/CHANGELOG.md
@@ -1,5 +1,11 @@
# @graphprotocol/graph-ts
+## 0.38.0
+
+### Minor Changes
+
+- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings
+
## 0.37.0
### Minor Changes
diff --git a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/api.mdx
index 3c3dbdc7671f..87734452737d 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/api.mdx
+++ b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/api.mdx
@@ -2,12 +2,12 @@
title: AssemblyScript API
---
-> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/).
+> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/).
-Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box:
+Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box:
- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`)
-- Code generated from subgraph files by `graph codegen`
+- Code generated from Subgraph files by `graph codegen`
You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript).
@@ -27,7 +27,7 @@ Knihovna `@graphprotocol/graph-ts` poskytuje následující API:
### Versions
-`apiVersion` v manifestu podgrafu určuje verzi mapovacího API, kterou pro daný podgraf používá uzel Graf.
+The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph.
| Version | Release notes |
| :-: | --- |
@@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts'
The `store` API allows entities to be loaded from, saved to, and removed from the Graph Node store.
-Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities.
+Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities.
#### Creating entities
@@ -282,8 +282,8 @@ Od verzí `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 a `@graphproto
The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists.
-- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip.
-- For some subgraphs, these missed lookups can contribute significantly to the indexing time.
+- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip.
+- For some Subgraphs, these missed lookups can contribute significantly to the indexing time.
```typescript
let id = event.transaction.hash // or however the ID is constructed
@@ -380,11 +380,11 @@ Ethereum API poskytuje přístup k inteligentním smlouvám, veřejným stavový
#### Support for Ethereum Types
-Stejně jako u entit generuje `graph codegen` třídy pro všechny inteligentní smlouvy a události používané v podgrafu. Za tímto účelem musí být ABI kontraktu součástí zdroje dat v manifestu podgrafu. Obvykle jsou soubory ABI uloženy ve složce `abis/`.
+As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder.
-Ve vygenerovaných třídách probíhají konverze mezi typy Ethereum [built-in-types](#built-in-types) v pozadí, takže se o ně autoři podgraf nemusí starat.
+With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them.
-To ilustruje následující příklad. Je dáno schéma podgrafu, jako je
+The following example illustrates this. Given a Subgraph schema like
```graphql
type Transfer @entity {
@@ -483,7 +483,7 @@ class Log {
#### Access to Smart Contract State
-Kód vygenerovaný nástrojem `graph codegen` obsahuje také třídy pro inteligentní smlouvy používané v podgrafu. Ty lze použít k přístupu k veřejným stavovým proměnným a k volání funkcí kontraktu v aktuálním bloku.
+The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block.
A common pattern is to access the contract from which an event originates. This is achieved with the following code:
@@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) {
As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables, a method with the same name is created automatically.
-Jakákoli jiná smlouva, která je součástí podgrafu, může být importována z vygenerovaného kódu a může být svázána s platnou adresou.
+Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address.
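Binding such a contract at a known address is then a one-liner; the sketch below assumes a generated `ERC20Contract` class and uses a placeholder address:

```typescript
import { Address } from '@graphprotocol/graph-ts'
import { ERC20Contract } from '../generated/ERC20Contract/ERC20Contract'

// bind() attaches the generated contract class to a concrete address,
// so read-only functions such as symbol() can be called at the current block
let contract = ERC20Contract.bind(Address.fromString('0x0000000000000000000000000000000000000000'))
let symbol = contract.symbol()
```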
#### Handling Reverted Calls
@@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false
import { log } from '@graphprotocol/graph-ts'
```
-The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument.
+The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments.
The `log` API includes the following functions:
@@ -590,7 +590,7 @@ The `log` API allows subgraphs to log information to the Graph Node standard out
- `log.info(fmt: string, args: Array): void` - logs an informational message.
- `log.warning(fmt: string, args: Array): void` - logs a warning.
- `log.error(fmt: string, args: Array): void` - logs an error message.
-- `log.critical(fmt: string, args: Array): void` - zaznamená kritickou zprávu _a_ ukončí podgraf.
+- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph.
The `log` API takes a format string and an array of string values. It then replaces the placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value, and so forth.
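As a sketch, this placeholder substitution looks like the following in a mapping:

```typescript
import { log } from '@graphprotocol/graph-ts'

// Each "{}" is replaced in order by the corresponding array value,
// producing the message: "Transfer from 0xabc to 0xdef"
log.info('Transfer from {} to {}', ['0xabc', '0xdef'])
```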
@@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId'))
Only the `json` flag is currently supported, and it must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` reads each line in the file, deserializes it into a `JSONValue`, and calls the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited.
-Při úspěchu vrátí `ipfs.map` hodnotu `void`. Pokud vyvolání zpětného volání způsobí chybu, obslužná rutina, která vyvolala `ipfs.map`, se přeruší a podgraf se označí jako neúspěšný.
+On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed.
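The callback named in an `ipfs.map` call receives one deserialized value per line of the file; a minimal sketch of such a callback (what it does with the value is up to the author) is:

```typescript
import { JSONValue, Value } from '@graphprotocol/graph-ts'

export function processItem(value: JSONValue, userData: Value): void {
  // Entity operations performed here are only persisted once the
  // handler that called ipfs.map finishes successfully
  let obj = value.toObject()
  // ... read fields from obj and save entities
}
```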
### Crypto API
@@ -836,7 +836,7 @@ Základní třída `Entity` a podřízená třída `DataSourceContext` mají pom
### DataSourceContext in Manifest
-Sekce `context` v rámci `dataSources` umožňuje definovat páry klíč-hodnota, které jsou přístupné v rámci mapování podgrafů. Dostupné typy jsou `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List` a `BigInt`.
+The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`.
Here is a YAML example illustrating the usage of various types in the `context` section:
@@ -887,4 +887,4 @@ dataSources:
- `List`: Specifies a list of items. Each item needs to specify its type and data.
- `BigInt`: Specifies a large integer value. Due to its large size, it must be quoted.
-Tento kontext je pak přístupný v souborech mapování podgrafů, což umožňuje vytvářet dynamičtější a konfigurovatelnější podgrafy.
+This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs.
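On the mapping side, these manifest-defined values are read through `dataSource.context()`; a sketch, assuming a `String` entry stored under the key `chainName`:

```typescript
import { dataSource } from '@graphprotocol/graph-ts'

// Typed getters (getString, getBool, getBigInt, ...) correspond
// to the types declared in the manifest's context section
let context = dataSource.context()
let chainName = context.getString('chainName')
```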
diff --git a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/common-issues.mdx
index 79ec3df1a827..419f698e68e4 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/common-issues.mdx
+++ b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/common-issues.mdx
@@ -2,7 +2,7 @@
title: Běžné problémy se AssemblyScript
---
-Při vývoji podgrafů se často vyskytují určité problémy [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Jejich obtížnost při ladění je různá, nicméně jejich znalost může pomoci. Následuje neúplný seznam těchto problémů:
+There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues:
- `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object.
- Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s).
diff --git a/website/src/pages/cs/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/cs/subgraphs/developing/creating/install-the-cli.mdx
index dbeac0c137a5..536b416c9465 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/install-the-cli.mdx
+++ b/website/src/pages/cs/subgraphs/developing/creating/install-the-cli.mdx
@@ -2,11 +2,11 @@
title: Instalace Graf CLI
---
-> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/).
+> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/).
## Overview
-The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network.
+The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network.
## Getting Started
@@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest
yarn global add @graphprotocol/graph-cli
```
-The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started.
+The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started.
## Create a Subgraph

### From an Existing Contract
-The following command creates a subgraph that indexes all events of an existing contract:
+The following command creates a Subgraph that indexes all events of an existing contract:
```sh
graph init \
@@ -51,25 +51,25 @@ graph init \
- If any of the optional arguments are missing, it guides you through an interactive form.
-- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page.
+- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page.
### From an Example Subgraph
-The following command initializes a new project from an example subgraph:
+The following command initializes a new project from an example Subgraph:
```sh
graph init --from-example=example-subgraph
```
-- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated.
+- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated.
-- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events.
+- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events.
### Add New `dataSources` to an Existing Subgraph
-`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them.
+`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them.
-Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command:
+Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command:
```sh
graph add []
@@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is
The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files:

- If you are building your own project, you will likely have access to your most current ABIs.
-- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile.
-- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail.
-
-## SpecVersion Releases
-
-| Verze | Poznámky vydání |
-| :-: | --- |
-| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
-| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
-| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs |
-| 0.0.9 | Supports `endBlock` feature |
-| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
-| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
-| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
-| 0.0.5 | Added support for event handlers having access to transaction receipts. |
-| 0.0.4 | Added support for managing subgraph features. |
+- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile.
+- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail.
diff --git a/website/src/pages/cs/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/cs/subgraphs/developing/creating/ql-schema.mdx
index c0a99bb516eb..ddc97aeed9e9 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/ql-schema.mdx
+++ b/website/src/pages/cs/subgraphs/developing/creating/ql-schema.mdx
@@ -4,7 +4,7 @@ title: The Graph QL Schema
## Overview
-The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language.
+The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language.
> Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section.
@@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar
Before defining entities, it is important to take a step back and think about how your data is structured and linked.
-- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform.
+- All queries will be made against the data model defined in the Subgraph schema. As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform.
- It may be useful to imagine entities as "objects containing data", rather than as events or functions.
- You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type.
- Each type that should be an entity is required to be annotated with an `@entity` directive.
@@ -141,7 +141,7 @@ type TokenBalance @entity {
Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived.
-U vztahů typu "jeden k mnoha" by měl být vztah vždy uložen na straně "jeden" a strana "mnoho" by měla být vždy odvozena. Uložení vztahu tímto způsobem namísto uložení pole entit na straně "mnoho" povede k výrazně lepšímu výkonu jak při indexování, tak při dotazování na podgraf. Obecně platí, že ukládání polí entit je třeba se vyhnout, pokud je to praktické.
+For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical.
#### Example
@@ -160,7 +160,7 @@ type TokenBalance @entity {
}
```
-Here is an example of how to write a mapping for a subgraph with reverse lookups:
+Here is an example of how to write a mapping for a Subgraph with reverse lookups:
```typescript
let token = new Token(event.address) // Create Token
@@ -231,7 +231,7 @@ query usersWithOrganizations {
}
```
-Tento propracovanější způsob ukládání vztahů mnoho-více vede k menšímu množství dat uložených pro podgraf, a tedy k podgrafu, který je často výrazně rychlejší při indexování a dotazování.
+This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore to a Subgraph that is often dramatically faster to index and to query.
### Adding comments to the schema
@@ -287,7 +287,7 @@ query {
}
```
-> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest.
+> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest.
## Supported Languages
diff --git a/website/src/pages/cs/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/cs/subgraphs/developing/creating/starting-your-subgraph.mdx
index 436b407a19ba..a0fcb52875ca 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/starting-your-subgraph.mdx
+++ b/website/src/pages/cs/subgraphs/developing/creating/starting-your-subgraph.mdx
@@ -4,20 +4,32 @@ title: Starting Your Subgraph
## Overview
-The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs.
+The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs.
-When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL.
+When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL.
-Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs.
+Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs.
### Start Building
-Start the process and build a subgraph that matches your needs:
+Start the process and build a Subgraph that matches your needs:
1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure
-2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component
+2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component
3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema
4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings
-5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features
+5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features
Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/).
+
+## SpecVersion Releases
+
+| Verze | Poznámky vydání |
+| :-: | --- |
+| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing Subgraph features. |
diff --git a/website/src/pages/cs/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/cs/subgraphs/developing/creating/subgraph-manifest.mdx
index a434110b4282..6b5bae4680cd 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/subgraph-manifest.mdx
+++ b/website/src/pages/cs/subgraphs/developing/creating/subgraph-manifest.mdx
@@ -4,19 +4,19 @@ title: Subgraph Manifest
## Přehled
-The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
+The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
-The **subgraph definition** consists of the following files:
+The **Subgraph definition** consists of the following files:
-- `subgraph.yaml`: Contains the subgraph manifest
+- `subgraph.yaml`: Contains the Subgraph manifest
-- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL
+- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL
- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide)
### Subgraph Capabilities
-A single subgraph can:
+A single Subgraph can:
- Index data from multiple smart contracts (but not multiple networks).
@@ -24,12 +24,12 @@ A single subgraph can:
- Add an entry for each contract that requires indexing to the `dataSources` array.
-The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md).
+The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md).
-For the example subgraph listed above, `subgraph.yaml` is:
+For the example Subgraph listed above, `subgraph.yaml` is:
```yaml
-specVersion: 0.0.4
+specVersion: 1.3.0
description: Gravatar for Ethereum
repository: https://github.com/graphprotocol/graph-tooling
schema:
@@ -54,7 +54,7 @@ dataSources:
data: 'bar'
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Gravatar
@@ -79,47 +79,47 @@ dataSources:
## Subgraph Entries
-> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/).
+> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/).
Důležité položky, které je třeba v manifestu aktualizovat, jsou:
-- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases.
+- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases.
-- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio.
+- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio.
-- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer.
+- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer.
- `features`: a list of all used [feature](#experimental-features) names.
-- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section.
+- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section.
-- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts.
+- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows indexing matching events from all contracts.
- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created.
- `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`.
-- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development.
+- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development.
- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file.
- `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings.
-- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store.
+- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store.
-- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store.
+- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store.
-- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract.
+- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract.
-A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array.
+A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array.
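+
+For illustration, the optional `startBlock` and `endBlock` entries can be combined in a single `source` section. This is only a sketch: the address and block numbers below are placeholders, and `endBlock` requires `specVersion` >= `0.0.9`:
+
+```yaml
+source:
+  address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' # placeholder contract address
+  abi: Gravity
+  startBlock: 6175244 # typically the block in which the contract was created
+  endBlock: 7175245 # indexing stops at this block, inclusive
+```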
## Event Handlers
-Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic.
+Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. This enables Subgraphs to process and store event data according to defined logic.
### Defining an Event Handler
-An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected.
+An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected.
```yaml
dataSources:
@@ -131,7 +131,7 @@ dataSources:
abi: Gravity
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Gravatar
@@ -149,11 +149,11 @@ dataSources:
## Zpracovatelé hovorů
-While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured.
+While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured.
Obsluhy volání se spustí pouze v jednom ze dvou případů: když je zadaná funkce volána jiným účtem než samotnou smlouvou nebo když je v Solidity označena jako externí a volána jako součást jiné funkce ve stejné smlouvě.
-> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network.
+> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers and are supported on every EVM network.
### Definice obsluhy volání
@@ -169,7 +169,7 @@ dataSources:
abi: Gravity
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Gravatar
@@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han
### Funkce mapování
-Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument:
+Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument:
```typescript
import { CreateGravatarCall } from '../generated/Gravity/Gravity'
@@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a
## Obsluha bloků
-Kromě přihlášení k událostem smlouvy nebo volání funkcí může podgraf chtít aktualizovat svá data, když jsou do řetězce přidány nové bloky. Za tímto účelem může podgraf spustit funkci po každém bloku nebo po blocích, které odpovídají předem definovanému filtru.
+In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a predefined filter.
### Podporované filtry
@@ -218,7 +218,7 @@ filter:
_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._
-> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing.
+> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing.
Protože pro obsluhu bloku neexistuje žádný filtr, zajistí, že obsluha bude volána každý blok. Zdroj dat může obsahovat pouze jednu blokovou obsluhu pro každý typ filtru.
@@ -232,7 +232,7 @@ dataSources:
abi: Gravity
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Gravatar
@@ -261,7 +261,7 @@ blockHandlers:
every: 10
```
-The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals.
+The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals.
#### Jednou Filtr
@@ -276,7 +276,7 @@ blockHandlers:
kind: once
```
-Definovaný obslužná rutina s filtrem once bude zavolána pouze jednou před spuštěním všech ostatních rutin. Tato konfigurace umožňuje, aby podgraf používal obslužný program jako inicializační obslužný, který provádí specifické úlohy na začátku indexování.
+The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing.
```ts
export function handleOnce(block: ethereum.Block): void {
@@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void {
### Funkce mapování
-The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities.
+The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities.
```typescript
import { ethereum } from '@graphprotocol/graph-ts'
@@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. By de
Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them.
-To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false.
+To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false.
```yaml
eventHandlers:
@@ -360,7 +360,7 @@ dataSources:
abi: Factory
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/mappings/factory.ts
entities:
@@ -390,7 +390,7 @@ templates:
abi: Exchange
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/mappings/exchange.ts
entities:
@@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ
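+
+As a sketch of the setters and getters (the template name `Exchange` and the key `tradingPair` are illustrative, not part of the example above), a context can be filled in when instantiating a data source from a template and read back inside that data source's mapping:
+
+```typescript
+import { Address, DataSourceContext, dataSource } from '@graphprotocol/graph-ts'
+import { Exchange } from '../generated/templates'
+
+// Factory mapping: attach a context when creating the data source
+export function createExchange(exchangeAddress: Address): void {
+  let context = new DataSourceContext()
+  context.setString('tradingPair', 'GRAV/ETH') // illustrative key and value
+  Exchange.createWithContext(exchangeAddress, context)
+}
+
+// Template mapping: read the value back
+export function readPair(): string {
+  let context = dataSource.context()
+  return context.getString('tradingPair')
+}
+```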
## Výchozí bloky
-The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created.
+The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created.
```yaml
dataSources:
@@ -467,7 +467,7 @@ dataSources:
startBlock: 6627917
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/mappings/factory.ts
entities:
@@ -488,13 +488,13 @@ dataSources:
## Tipy indexátor
-The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning.
+The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning.
> This feature is available from `specVersion: 1.0.0`
### Prořezávat
-`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include:
+`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include:
1. `"never"`: No pruning of historical data; retains the entire history.
2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance.
@@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde
prune: auto
```
-> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities.
+> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities.
History as of a given block is required for:
-- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history
-- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block
-- Rewinding the subgraph back to that block
+- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history
+- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block
+- Rewinding the Subgraph back to that block
If historical data as of the block has been pruned, the above capabilities will not be available.
> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data.
-For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings:
+For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your Subgraph's settings:
Uchování určitého množství historických dat:
@@ -532,3 +532,18 @@ Zachování kompletní historie entitních států:
indexerHints:
prune: never
```
+
+## SpecVersion Releases
+
+| Verze | Poznámky vydání |
+| :-: | --- |
+| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing Subgraph features. |
diff --git a/website/src/pages/cs/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/cs/subgraphs/developing/creating/unit-testing-framework.mdx
index fd0130dd672a..691624b81344 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/unit-testing-framework.mdx
+++ b/website/src/pages/cs/subgraphs/developing/creating/unit-testing-framework.mdx
@@ -2,12 +2,12 @@
title: Rámec pro testování jednotek
---
-Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs.
+Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs.
## Benefits of Using Matchstick
- It's written in Rust and optimized for high performance.
-- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more.
+- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and more.
## Začínáme
@@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra
### Using Matchstick
-To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified).
+To use **Matchstick** in your Subgraph project, open a terminal, navigate to the root folder of your project, and run `graph test [options]` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified).
### Možnosti CLI
@@ -113,7 +113,7 @@ graph test path/to/file.test.ts
```sh
-c, --coverage Run the tests in coverage mode
--d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph)
+-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph)
-f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image.
-h, --help Show usage information
-l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes)
@@ -145,17 +145,17 @@ libsFolder: path/to/libs
manifestPath: path/to/subgraph.yaml
```
-### Ukázkový podgraf
+### Demo Subgraph
You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph)
### Videonávody
-Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h)
+You can also check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h)
## Tests structure
-_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_
+_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_
### describe()
@@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im
A je to tady - vytvořili jsme první test! 👏
-Pro spuštění našich testů nyní stačí v kořenové složce podgrafu spustit následující příkaz:
+Now in order to run our tests you simply need to run the following in your Subgraph root folder:
`graph test Gravity`
@@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri
Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file.
-NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow:
+NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for Matchstick to detect it, like the `processGravatar()` function in the test example below:
`.test.ts` file:
@@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index'
import { ipfs } from '@graphprotocol/graph-ts'
import { gravatarFromIpfs } from './utils'
-// Export ipfs.map() callback in order for matchstck to detect it
+// Export ipfs.map() callback in order for matchstick to detect it
export { processGravatar } from './utils'
test('ipfs.cat', () => {
@@ -1172,7 +1172,7 @@ templates:
network: mainnet
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/token-lock-wallet.ts
handler: handleMetadata
@@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => {
## Pokrytí test
-Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
+Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as a whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it has been properly mocked.
@@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as
## Další zdroje
-For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_).
+For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme).
## Zpětná vazba
diff --git a/website/src/pages/cs/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/cs/subgraphs/developing/deploying/multiple-networks.mdx
index 77f05e1ad499..e9848601ebc7 100644
--- a/website/src/pages/cs/subgraphs/developing/deploying/multiple-networks.mdx
+++ b/website/src/pages/cs/subgraphs/developing/deploying/multiple-networks.mdx
@@ -1,12 +1,13 @@
---
title: Deploying a Subgraph to Multiple Networks
+sidebarTitle: Deploying to Multiple Networks
---
-This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/).
+This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/).
-## Nasazení podgrafu do více sítí
+## Deploying the Subgraph to multiple networks
-V některých případech budete chtít nasadit stejný podgraf do více sítí, aniž byste museli duplikovat celý jeho kód. Hlavním problémem, který s tím souvisí, je skutečnost, že smluvní adresy v těchto sítích jsou různé.
+In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different.
### Using `graph-cli`
@@ -20,7 +21,7 @@ Options:
--network-file Networks config file path (default: "./networks.json")
```
-You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development.
+You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development.
> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks.
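For reference, a minimal `networks.json` might look like the sketch below — the dataSource name, addresses, and start blocks are placeholders for illustration, not values taken from this guide:

```json
{
  "mainnet": {
    "Gravity": {
      "address": "0x2E645469f354BB4F5c8a05B3b30A929361cf77eC",
      "startBlock": 6175244
    }
  },
  "sepolia": {
    "Gravity": {
      "address": "0x0000000000000000000000000000000000000000",
      "startBlock": 4000000
    }
  }
}
```

Each top-level key is a network name, and each nested key must match a `dataSource` name from `subgraph.yaml`.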
@@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit
> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option.
-Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`:
+Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`:
```yaml
# ...
@@ -96,7 +97,7 @@ yarn build --network sepolia
yarn build --network sepolia --network-file path/to/config
```
-The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this:
+The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this:
```yaml
# ...
@@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config
One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/).
-To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network:
+To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network:
```json
{
@@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional
}
```
-To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands:
+To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands:
```sh
# Mainnet:
@@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e
**Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names, or where mappings or ABIs need to be generated from templates as well.
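Assuming the template file is named `subgraph.template.yaml` and the per-network configs live under `config/` (both names are illustrative), the generation step could be wired up as:

```sh
# Render the manifest from the Mustache template using the
# network-specific config, then build as usual.
mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml
# or, for Sepolia:
mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml
```

These would typically be run from `package.json` scripts such as `prepare:mainnet` / `prepare:sepolia`, as in the linked example repository.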
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error.
+This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error.
-## Zásady archivace subgrafů Subgraph Studio
+## Subgraph Studio Subgraph archive policy
-A subgraph version in Studio is archived if and only if it meets the following criteria:
+A Subgraph version in Studio is archived if and only if it meets the following criteria:
- The version is not published to the network (or pending publish)
- The version was created 45 or more days ago
-- The subgraph hasn't been queried in 30 days
+- The Subgraph hasn't been queried in 30 days
-In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived.
+In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived.
-Každý podgraf ovlivněný touto zásadou má možnost vrátit danou verzi zpět.
+Every Subgraph affected by this policy has the option to bring the version in question back.
-## Kontrola stavu podgrafů
+## Checking Subgraph health
-Pokud se podgraf úspěšně synchronizuje, je to dobré znamení, že bude dobře fungovat navždy. Nové spouštěče v síti však mohou způsobit, že se podgraf dostane do neověřeného chybového stavu, nebo může začít zaostávat kvůli problémům s výkonem či operátory uzlů.
+If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators.
-Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph:
+Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:
```graphql
{
@@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of
}
```
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error.
+This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error.
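For context, the status query that the fragments above come from looks roughly like this — `organization/subgraph-name` is a placeholder for your own Subgraph name:

```graphql
{
  indexingStatusForCurrentVersion(subgraphName: "organization/subgraph-name") {
    synced
    health
    fatalError {
      message
      block {
        number
        hash
      }
      handler
    }
    chains {
      chainHeadBlock {
        number
      }
      latestBlock {
        number
      }
    }
  }
}
```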
diff --git a/website/src/pages/cs/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/cs/subgraphs/developing/deploying/using-subgraph-studio.mdx
index 7c53f174237a..14be0175123c 100644
--- a/website/src/pages/cs/subgraphs/developing/deploying/using-subgraph-studio.mdx
+++ b/website/src/pages/cs/subgraphs/developing/deploying/using-subgraph-studio.mdx
@@ -2,23 +2,23 @@
title: Deploying Using Subgraph Studio
---
-Learn how to deploy your subgraph to Subgraph Studio.
+Learn how to deploy your Subgraph to Subgraph Studio.
-> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain.
+> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain.
## Subgraph Studio Overview
In [Subgraph Studio](https://thegraph.com/studio/), you can do the following:
-- View a list of subgraphs you've created
-- Manage, view details, and visualize the status of a specific subgraph
-- Vytváření a správa klíčů API pro konkrétní podgrafy
+- View a list of Subgraphs you've created
+- Manage, view details, and visualize the status of a specific Subgraph
+- Create and manage your API keys for specific Subgraphs
- Restrict your API keys to specific domains and allow only certain Indexers to query with them
-- Create your subgraph
-- Deploy your subgraph using The Graph CLI
-- Test your subgraph in the playground environment
-- Integrate your subgraph in staging using the development query URL
-- Publish your subgraph to The Graph Network
+- Create your Subgraph
+- Deploy your Subgraph using The Graph CLI
+- Test your Subgraph in the playground environment
+- Integrate your Subgraph in staging using the development query URL
+- Publish your Subgraph to The Graph Network
- Manage your billing
## Install The Graph CLI
@@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli
1. Open [Subgraph Studio](https://thegraph.com/studio/).
2. Connect your wallet to sign in.
- You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe.
-3. After you sign in, your unique deploy key will be displayed on your subgraph details page.
- - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised.
+3. After you sign in, your unique deploy key will be displayed on your Subgraph details page.
+ - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised.
-> Important: You need an API key to query subgraphs
+> Important: You need an API key to query Subgraphs
### Jak vytvořit podgraf v Podgraf Studio
@@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli
### Kompatibilita podgrafů se sítí grafů
-Aby mohly být podgrafy podporovány indexátory v síti grafů, musí:
-
-- Index a [supported network](/supported-networks/)
-- Nesmí používat žádnou z následujících funkcí:
- - ipfs.cat & ipfs.map
- - Nefatální
- - Roubování
+To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo.
## Initialize Your Subgraph
-Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command:
+Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command:
```bash
graph init
```
-You can find the `` value on your subgraph details page in Subgraph Studio, see image below:
+You can find the `` value on your Subgraph details page in Subgraph Studio, see image below:

-After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected.
+After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected.
## Autorizace grafu
-Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page.
+Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page.
Then, use the following command to authenticate from the CLI:
@@ -91,11 +85,11 @@ graph auth
## Deploying a Subgraph
-Once you are ready, you can deploy your subgraph to Subgraph Studio.
+Once you are ready, you can deploy your Subgraph to Subgraph Studio.
-> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network.
+> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network.
-Use the following CLI command to deploy your subgraph:
+Use the following CLI command to deploy your Subgraph:
```bash
graph deploy
@@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label.
## Testing Your Subgraph
-After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.
+After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.
-Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph.
+Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph.
## Publish Your Subgraph
-In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
+In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
## Versioning Your Subgraph with the CLI
-If you want to update your subgraph, you can do the following:
+If you want to update your Subgraph, you can do the following:
- You can deploy a new version to Studio using the CLI (it will only be private at this point).
- Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer).
-- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index.
+- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index.
-You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment.
+You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment.
-> Note: There are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/).
+> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/).
## Automatická archivace verzí podgrafů
-Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio.
+Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio.
-> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived.
+> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived.

diff --git a/website/src/pages/cs/subgraphs/developing/developer-faq.mdx b/website/src/pages/cs/subgraphs/developing/developer-faq.mdx
index e07a7f06fb48..2c5d8903c4d9 100644
--- a/website/src/pages/cs/subgraphs/developing/developer-faq.mdx
+++ b/website/src/pages/cs/subgraphs/developing/developer-faq.mdx
@@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o
## Subgraph Related
-### 1. Co je to podgraf?
+### 1. What is a Subgraph?
-A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query.
+A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query.
-### 2. What is the first step to create a subgraph?
+### 2. What is the first step to create a Subgraph?
-To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/).
+To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/).
-### 3. Can I still create a subgraph if my smart contracts don't have events?
+### 3. Can I still create a Subgraph if my smart contracts don't have events?
-It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data.
+It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data.
-If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower.
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower.
-### 4. Mohu změnit účet GitHub přidružený k mému podgrafu?
+### 4. Can I change the GitHub account associated with my Subgraph?
-No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph.
+No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph.
-### 5. How do I update a subgraph on mainnet?
+### 5. How do I update a Subgraph on mainnet?
-You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on.
+You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action maintains your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on.
-### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying?
+### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying?
-Podgraf musíte znovu nasadit, ale pokud se ID podgrafu (hash IPFS) nezmění, nebude se muset synchronizovat od začátku.
+You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning.
-### 7. How do I call a contract function or access a public state variable from my subgraph mappings?
+### 7. How do I call a contract function or access a public state variable from my Subgraph mappings?
Take a look at the `Access to smart contract state` section inside [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state).
-### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings?
+### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings?
Not currently, as mappings are written in AssemblyScript.
@@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p
### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events?
-V rámci podgrafu se události zpracovávají vždy v pořadí, v jakém se objevují v blocích, bez ohledu na to, zda se jedná o více smluv.
+Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not.
### 10. How are templates different from data sources?
-Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address.
+Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address.
Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates).
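As a rough sketch of what instantiating a template looks like in a mapping — all names here (`ExchangeTemplate`, `Factory`, `NewExchange`, `exchange`) are hypothetical and would come from your own `subgraph.yaml` and `graph codegen` output:

```typescript
// AssemblyScript mapping sketch (illustrative names).
// `ExchangeTemplate` is generated by `graph codegen` from a
// template declared in subgraph.yaml.
import { ExchangeTemplate } from "../generated/templates"
import { NewExchange } from "../generated/Factory/Factory"

export function handleNewExchange(event: NewExchange): void {
  // Start indexing the newly spawned contract at its address
  ExchangeTemplate.create(event.params.exchange)
}
```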
-### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`?
+### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`?
Yes. In the `graph init` command itself you can add multiple dataSources by entering contracts one after the other.
@@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest
If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique.
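A minimal sketch of that idea in a graph-ts mapping — the `Transfer` entity is hypothetical, and `concatI32` assumes a reasonably recent graph-ts version:

```typescript
// Sketch: derive an entity ID from transaction hash + log index,
// which is unique when at most one entity is created per event.
let id = event.transaction.hash.concatI32(event.logIndex.toI32())
let entity = new Transfer(id)
entity.save()
```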
-### 15. Can I delete my subgraph?
+### 15. Can I delete my Subgraph?
-Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph.
+Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph.
## Network Related
@@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul
Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks)
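For illustration, `startBlock` sits under `source` in the manifest; the name, address, and block number below are placeholders:

```yaml
dataSources:
  - kind: ethereum
    name: Gravity
    network: mainnet
    source:
      address: "0x2E645469f354BB4F5c8a05B3b30A929361cf77eC"
      abi: Gravity
      # Skip blocks before the contract existed to speed up syncing
      startBlock: 6175244
```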
-### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync
+### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync
You should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks)
-### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed?
+### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed?
Ano! Vyzkoušejte následující příkaz, přičemž "organization/subgraphName" nahraďte názvem organizace, pod kterou je publikován, a názvem vašeho podgrafu:
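The command referred to here is along these lines — a sketch against the index-node endpoint, with `organization/subgraphName` as the placeholder to substitute:

```sh
# Query the index-node endpoint for the latest indexed block
# of the current version of a Subgraph.
curl -X POST \
  -d '{ "query": "{ indexingStatusForCurrentVersion(subgraphName: \"organization/subgraphName\") { chains { latestBlock { number } chainHeadBlock { number } } } }" }' \
  https://api.thegraph.com/index-node/graphql
```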
@@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... }
### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high?
-Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.
+Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_, and to a specific Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.
## Miscellaneous
diff --git a/website/src/pages/cs/subgraphs/developing/introduction.mdx b/website/src/pages/cs/subgraphs/developing/introduction.mdx
index 110d7639aded..b040c749c6ca 100644
--- a/website/src/pages/cs/subgraphs/developing/introduction.mdx
+++ b/website/src/pages/cs/subgraphs/developing/introduction.mdx
@@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin
On The Graph, you can:
-1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
-2. Use GraphQL to query existing subgraphs.
+1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
+2. Use GraphQL to query existing Subgraphs.
### What is GraphQL?
-- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs.
### Developer Actions
-- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps.
-- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers.
-- Deploy, publish and signal your subgraphs within The Graph Network.
+- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps.
+- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers.
+- Deploy, publish and signal your Subgraphs within The Graph Network.
-### What are subgraphs?
+### What are Subgraphs?
-A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL.
+A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL.
-Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics.
+Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics.
diff --git a/website/src/pages/cs/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/cs/subgraphs/developing/managing/deleting-a-subgraph.mdx
index 77896e36a45d..b8c2330ca49d 100644
--- a/website/src/pages/cs/subgraphs/developing/managing/deleting-a-subgraph.mdx
+++ b/website/src/pages/cs/subgraphs/developing/managing/deleting-a-subgraph.mdx
@@ -2,30 +2,30 @@
title: Deleting a Subgraph
---
-Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/).
+Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/).
-> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it.
+> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it.
## Step-by-Step
-1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/).
+1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/).
2. Click on the three-dots to the right of the "publish" button.
-3. Click on the option to "delete this subgraph":
+3. Click on the option to "delete this Subgraph":

-4. Depending on the subgraph's status, you will be prompted with various options.
+4. Depending on the Subgraph's status, you will be prompted with various options.
- - If the subgraph is not published, simply click “delete” and confirm.
- - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required.
+ - If the Subgraph is not published, simply click “delete” and confirm.
+ - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required.
-> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner.
+> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner.
### Important Reminders
-- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal.
-- Kurátoři již nebudou moci signalizovat na podgrafu.
-- Curators that already signaled on the subgraph can withdraw their signal at an average share price.
-- Deleted subgraphs will show an error message.
+- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal.
+- Curators will not be able to signal on the Subgraph anymore.
+- Curators that already signaled on the Subgraph can withdraw their signal at an average share price.
+- Deleted Subgraphs will show an error message.
diff --git a/website/src/pages/cs/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/cs/subgraphs/developing/managing/transferring-a-subgraph.mdx
index 0fc6632cbc40..e80bde3fa6d2 100644
--- a/website/src/pages/cs/subgraphs/developing/managing/transferring-a-subgraph.mdx
+++ b/website/src/pages/cs/subgraphs/developing/managing/transferring-a-subgraph.mdx
@@ -2,18 +2,18 @@
title: Transferring a Subgraph
---
-Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network.
+Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network.
## Reminders
-- Whoever owns the NFT controls the subgraph.
-- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network.
-- You can easily move control of a subgraph to a multi-sig.
-- A community member can create a subgraph on behalf of a DAO.
+- Whoever owns the NFT controls the Subgraph.
+- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network.
+- You can easily move control of a Subgraph to a multi-sig.
+- A community member can create a Subgraph on behalf of a DAO.
## View Your Subgraph as an NFT
-To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**:
+To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**:
```
https://opensea.io/your-wallet-address
@@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres
## Step-by-Step
-To transfer ownership of a subgraph, do the following:
+To transfer ownership of a Subgraph, do the following:
1. Use the UI built into Subgraph Studio:

-2. Choose the address that you would like to transfer the subgraph to:
+2. Choose the address that you would like to transfer the Subgraph to:

diff --git a/website/src/pages/cs/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/cs/subgraphs/developing/publishing/publishing-a-subgraph.mdx
index ed8846e26498..29c75273aa17 100644
--- a/website/src/pages/cs/subgraphs/developing/publishing/publishing-a-subgraph.mdx
+++ b/website/src/pages/cs/subgraphs/developing/publishing/publishing-a-subgraph.mdx
@@ -1,10 +1,11 @@
---
title: Zveřejnění podgrafu v decentralizované síti
+sidebarTitle: Publishing to the Decentralized Network
---
-Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network.
+Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network.
-When you publish a subgraph to the decentralized network, you make it available for:
+When you publish a Subgraph to the decentralized network, you make it available for:
- [Curators](/resources/roles/curating/) to begin curating it.
- [Indexers](/indexing/overview/) to begin indexing it.
@@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/).
1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard
2. Click on the **Publish** button
-3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/).
+3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/).
-All published versions of an existing subgraph can:
+All published versions of an existing Subgraph can:
- Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/).
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published.
+- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published.
-### Aktualizace metadata publikovaného podgrafu
+### Updating metadata for a published Subgraph
-- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio.
+- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio.
- Once you’ve saved your changes and published the updates, they will appear in Graph Explorer.
- It's important to note that this process will not create a new version since your deployment has not changed.
## Publishing from the CLI
-As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli).
+As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli).
1. Open the `graph-cli`.
2. Use the following commands: `graph codegen && graph build` then `graph publish`.
-3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice.
+3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice.

### Customizing your deployment
-You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags:
+You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags:
```
USAGE
@@ -61,33 +62,33 @@ FLAGS
```
-## Přidání signálu do podgrafu
+## Adding signal to your Subgraph
-Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph.
+Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph.
-- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled.
+- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled.
-- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
+- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
- Specific supported networks can be checked [here](/supported-networks/).
-> Přidání signálu do podgrafu, který nemá nárok na odměny, nepřiláká další indexátory.
+> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers.
>
-> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph.
+> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional Indexers to index your Subgraph.
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
+The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
-When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
+When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
-Indexers can find subgraphs to index based on curation signals they see in Graph Explorer.
+Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer.
-
+
-Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published.
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published.

-Případně můžete přidat signál GRT do publikovaného podgrafu z Průzkumníka grafů.
+Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer.

diff --git a/website/src/pages/cs/subgraphs/developing/subgraphs.mdx b/website/src/pages/cs/subgraphs/developing/subgraphs.mdx
index f197aabdc49c..a998db9c316d 100644
--- a/website/src/pages/cs/subgraphs/developing/subgraphs.mdx
+++ b/website/src/pages/cs/subgraphs/developing/subgraphs.mdx
@@ -4,83 +4,83 @@ title: Podgrafy
## What is a Subgraph?
-A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL.
+A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL.
### Subgraph Capabilities
- **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3.
-- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/).
-- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer).
+- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/).
+- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer).
## Inside a Subgraph
-The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
+The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and makes available to query.
-The **subgraph definition** consists of the following files:
+The **Subgraph definition** consists of the following files:
-- `subgraph.yaml`: Contains the subgraph manifest
+- `subgraph.yaml`: Contains the Subgraph manifest
-- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL
+- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL
- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema
-To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/).
+To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/).
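The three files listed above might fit together like the following minimal `subgraph.yaml` sketch — the contract address, ABI name, event signature, and handler are placeholders for illustration, not a real deployment:

```yaml
# Illustrative manifest sketch — address, ABI, event, and handler are placeholders.
specVersion: 0.0.5
schema:
  file: ./schema.graphql
dataSources:
  - kind: ethereum
    name: ExampleContract
    network: mainnet
    source:
      address: "0x0000000000000000000000000000000000000000"
      abi: ExampleContract
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      entities:
        - Transfer
      abis:
        - name: ExampleContract
          file: ./abis/ExampleContract.json
      eventHandlers:
        - event: Transfer(indexed address,indexed address,uint256)
          handler: handleTransfer
      file: ./mapping.ts
```

The `eventHandlers` entry is what ties an onchain event to the AssemblyScript function in `mapping.ts` that writes entities defined in `schema.graphql`.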
## Životní cyklus podgrafů
-Here is a general overview of a subgraph’s lifecycle:
+Here is a general overview of a Subgraph’s lifecycle:

## Subgraph Development
-1. [Create a subgraph](/developing/creating-a-subgraph/)
-2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/)
-3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio)
-4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/)
-5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph)
+1. [Create a Subgraph](/developing/creating-a-subgraph/)
+2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/)
+3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio)
+4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/)
+5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph)
### Build locally
-Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs.
+Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs.
### Deploy to Subgraph Studio
-Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following:
+Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following:
-- Use its staging environment to index the deployed subgraph and make it available for review.
-- Verify that your subgraph doesn't have any indexing errors and works as expected.
+- Use its staging environment to index the deployed Subgraph and make it available for review.
+- Verify that your Subgraph doesn't have any indexing errors and works as expected.
### Publish to the Network
-When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network.
+When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network.
-- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers.
-- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT.
-- Published subgraphs have associated metadata, which provides other network participants with useful context and information.
+- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers.
+- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT.
+- Published Subgraphs have associated metadata, which provides other network participants with useful context and information.
### Add Curation Signal for Indexing
-Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph.
+Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing, you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph.
#### What is signal?
-- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it.
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume.
+- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it.
+- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume.
### Querying & Application Development
Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/).
-Learn more about [querying subgraphs](/subgraphs/querying/introduction/).
+Learn more about [querying Subgraphs](/subgraphs/querying/introduction/).
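Once published and indexed, a Subgraph is queried with an ordinary GraphQL POST. The sketch below builds the request without sending it; the gateway URL shape and the placeholder key/ID are assumptions for illustration, not real credentials:

```python
import json

# Assumed gateway URL shape; API key and Subgraph ID are placeholders.
GATEWAY = "https://gateway.thegraph.com/api/{api_key}/subgraphs/id/{subgraph_id}"

def build_request(api_key: str, subgraph_id: str, query: str) -> tuple[str, bytes]:
    """Return the (url, POST body) pair for a GraphQL query against a Subgraph."""
    url = GATEWAY.format(api_key=api_key, subgraph_id=subgraph_id)
    body = json.dumps({"query": query}).encode()  # GraphQL-over-HTTP JSON body
    return url, body

url, body = build_request("YOUR_API_KEY", "SUBGRAPH_ID", "{ _meta { block { number } } }")
# `url` and `body` can then be passed to any HTTP client (urllib, requests, fetch).
```

Keeping request construction separate from the HTTP call makes the query shape easy to unit-test without network access.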
### Updating Subgraphs
-To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing.
+To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing.
-- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax.
-- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying.
+- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax.
+- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying.
### Deleting & Transferring Subgraphs
-If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/).
+If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/).
diff --git a/website/src/pages/cs/subgraphs/explorer.mdx b/website/src/pages/cs/subgraphs/explorer.mdx
index b679cdbb8c43..2d918567ee9d 100644
--- a/website/src/pages/cs/subgraphs/explorer.mdx
+++ b/website/src/pages/cs/subgraphs/explorer.mdx
@@ -2,11 +2,11 @@
title: Průzkumník grafů
---
-Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).
+Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).
## Přehled
-Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
+Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
## Inside Explorer
@@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi
### Subgraphs Page
-After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:
+After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:
-- Your own finished subgraphs
+- Your own finished Subgraphs
- Subgraphs published by others
-- The exact subgraph you want (based on the date created, signal amount, or name).
+- The exact Subgraph you want (based on the date created, signal amount, or name).

-When you click into a subgraph, you will be able to do the following:
+When you click into a Subgraph, you will be able to do the following:
- Test queries in the playground and be able to leverage network details to make informed decisions.
-- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality.
+- Signal GRT on your own Subgraph or the Subgraphs of others to make Indexers aware of their importance and quality.
- - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
+ - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.

-On each subgraph’s dedicated page, you can do the following:
+On each Subgraph’s dedicated page, you can do the following:
-- Signál/nesignál na podgraf
+- Signal/Un-signal on Subgraphs
- Zobrazit další podrobnosti, například grafy, ID aktuálního nasazení a další metadata
-- Přepínání verzí pro zkoumání minulých iterací podgrafu
-- Dotazování na podgrafy prostřednictvím GraphQL
-- Testování podgrafů na hřišti
-- Zobrazení indexátorů, které indexují na určitém podgrafu
+- Switch versions to explore past iterations of the Subgraph
+- Query Subgraphs via GraphQL
+- Test Subgraphs in the playground
+- View the Indexers that are indexing on a certain Subgraph
- Statistiky podgrafů (alokace, kurátoři atd.)
-- Zobrazení subjektu, který podgraf zveřejnil
+- View the entity who published the Subgraph

@@ -53,7 +53,7 @@ On this page, you can see the following:
- Indexers who collected the most query fees
- Indexers with the highest estimated APR
-Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph.
+Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph.
### Participants Page
@@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every

-Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs.
+Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs.
-In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards.
+In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
**Specifics**
@@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s
- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
-- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing.
+- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated.
- Maximální kapacita delegování - maximální množství delegovaných podílů, které může indexátor produktivně přijmout. Nadměrný delegovaný podíl nelze použít pro alokace nebo výpočty odměn.
- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time.
@@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici
#### 2. Kurátoři
-Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed.
+Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed.
-- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve.
- - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on.
+- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve.
+ - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on.
- The bonding curve incentivizes Curators to curate the highest quality data sources.
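The early-curator incentive can be sketched with a toy bonding curve — illustrative only; the reserve ratio and bootstrap rule below are assumptions, not The Graph's actual curve or parameters:

```python
# Toy Bancor-style bonding curve for curation shares (illustrative only).
def shares_minted(reserve: float, supply: float, deposit: float, ratio: float = 0.5) -> float:
    """Shares minted for `deposit` GRT given the pool's current reserve and share supply."""
    if reserve == 0:
        # Bootstrap: in this toy model the first deposit mints deposit**ratio shares.
        return deposit ** ratio
    return supply * ((1 + deposit / reserve) ** ratio - 1)

# The same GRT deposit mints fewer shares once the reserve has grown,
# which is what rewards Curators who signal on a Subgraph early.
```

Because shares get more expensive as the reserve grows, identifying a high-quality Subgraph before others do is what the curve pays for.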
In the The Curator table listed below you can see:
@@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ
A few key details to note:
-- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers.
-- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
+- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers.
+- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (i.e., during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).

@@ -178,15 +178,15 @@ In this section, you can view the following:
### Tab Podgrafy
-In the Subgraphs tab, you’ll see your published subgraphs.
+In the Subgraphs tab, you’ll see your published Subgraphs.
-> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.
+> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.

### Tab Indexování
-In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.
+In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.
Tato část bude také obsahovat podrobnosti o vašich čistých odměnách za indexování a čistých poplatcích za dotazy. Zobrazí se následující metriky:
@@ -223,13 +223,13 @@ Nezapomeňte, že tento graf lze horizontálně posouvat, takže pokud se posune
### Tab Kurátorství
-Na kartě Kurátorství najdete všechny dílčí grafy, na které signalizujete (a které vám tak umožňují přijímat poplatky za dotazy). Signalizace umožňuje kurátorům upozornit indexátory na to, které podgrafy jsou hodnotné a důvěryhodné, a tím signalizovat, že je třeba je indexovat.
+In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they should be indexed.
Na této tab najdete přehled:
-- Všechny dílčí podgrafy, na kterých kurátor pracuje, s podrobnostmi o signálu
-- Celkové podíly na podgraf
-- Odměny za dotaz na podgraf
+- All the Subgraphs you're curating on with signal details
+- Share totals per Subgraph
+- Query rewards per Subgraph
- Aktualizováno v detailu data

diff --git a/website/src/pages/cs/subgraphs/guides/_meta.js b/website/src/pages/cs/subgraphs/guides/_meta.js
index 37e18bc51651..a1bb04fb6d3f 100644
--- a/website/src/pages/cs/subgraphs/guides/_meta.js
+++ b/website/src/pages/cs/subgraphs/guides/_meta.js
@@ -1,4 +1,5 @@
export default {
+ 'subgraph-composition': '',
'subgraph-debug-forking': '',
near: '',
arweave: '',
diff --git a/website/src/pages/cs/subgraphs/guides/arweave.mdx b/website/src/pages/cs/subgraphs/guides/arweave.mdx
index 08e6c4257268..dff8facf77d4 100644
--- a/website/src/pages/cs/subgraphs/guides/arweave.mdx
+++ b/website/src/pages/cs/subgraphs/guides/arweave.mdx
@@ -1,50 +1,50 @@
---
-title: Building Subgraphs on Arweave
+title: Building Subgraphs on Arweave
---
> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs!
-In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain.
+In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain.
-## What is Arweave?
+## What is Arweave?
-The Arweave protocol allows developers to store data permanently and that is the main difference between Arweave and IPFS, where IPFS lacks the feature; permanence, and files stored on Arweave can't be changed or deleted.
+The Arweave protocol allows developers to store data permanently, and that is the main difference between Arweave and IPFS: IPFS lacks permanence, while files stored on Arweave can't be changed or deleted.
-Arweave already has built numerous libraries for integrating the protocol in a number of different programming languages. For more information you can check:
+Arweave has already built numerous libraries for integrating the protocol in a number of different programming languages. For more information, you can check:
- [Arwiki](https://arwiki.wiki/#/en/main)
- [Arweave Resources](https://www.arweave.org/build)
-## What are Arweave Subgraphs?
+## What are Arweave Subgraphs?
The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/).
[Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions), it is not indexing the stored files yet.
-## Building an Arweave Subgraph
+## Building an Arweave Subgraph
-To be able to build and deploy Arweave Subgraphs, you need two packages:
+To be able to build and deploy Arweave Subgraphs, you need two packages:
1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.
-## Subgraph's components
+## Subgraph Components
There are three components of a Subgraph:
### 1. Manifest - `subgraph.yaml`
-Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source.
+Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source.
### 2. Schema - `schema.graphql`
-Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body.
+Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is similar to a model for an API, where the model defines the structure of a request body.
The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
### 3. AssemblyScript Mappings - `mapping.ts`
-This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed.
+This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and stored based on the schema you have listed.
During Subgraph development there are two key commands:
@@ -53,7 +53,7 @@ $ graph codegen # generates types from the schema file identified in the manifes
$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
```
-## Subgraph Manifest Definition
+## Subgraph Manifest Definition
The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph:
@@ -84,24 +84,24 @@ dataSources:
- Arweave Subgraphs introduce a new kind of data source (`arweave`)
- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet`
-- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet
+Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet
-Arweave data sources support two types of handlers:
+Arweave data sources support two types of handlers:
- `blockHandlers` - Run on every new Arweave block. No source.owner is required.
- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner`
-> The source.owner can be the owner's address, or their Public Key.
-
-> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users.
-
+> The source.owner can be the owner's address, or their Public Key.
+>
+> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users.
+>
> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet.
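The owner-matching rule described above (an empty string matches every transaction) can be sketched in plain JavaScript. This is purely an illustration: `matchesOwner` is a hypothetical helper, not part of graph-ts or Graph Node.

```javascript
// Hypothetical sketch of how a transactionHandler's source.owner filter behaves:
// "" matches every transaction; otherwise the transaction's owner (address or
// public key) must equal source.owner exactly.
function matchesOwner(sourceOwner, txOwner) {
  return sourceOwner === '' || sourceOwner === txOwner
}

// Made-up example transactions for illustration only.
const txs = [
  { id: 1, owner: 'abc123' },
  { id: 2, owner: 'def456' },
]

// "" → the handler runs for all transactions
console.log(txs.filter((tx) => matchesOwner('', tx.owner)).length) // 2
// a specific owner → only that owner's transactions
console.log(txs.filter((tx) => matchesOwner('abc123', tx.owner)).length) // 1
```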
-## Schema Definition
+## Schema Definition
Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
-## AssemblyScript Mappings
+## AssemblyScript Mappings
The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).
@@ -150,7 +150,7 @@ Block handlers receive a `Block`, while transactions receive a `Transaction`.
Writing the mappings of an Arweave Subgraph is very similar to writing the mappings of an Ethereum Subgraph. For more information, click [here](/developing/creating-a-subgraph/#writing-mappings).
-## Deploying an Arweave Subgraph in Subgraph Studio
+## Deploying an Arweave Subgraph in Subgraph Studio
Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command.
@@ -158,11 +158,11 @@ Once your Subgraph has been created on your Subgraph Studio dashboard, you can d
graph deploy --access-token
```
-## Querying an Arweave Subgraph
+## Querying an Arweave Subgraph
The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
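As a sketch, querying such an endpoint is a plain GraphQL POST request. The endpoint URL below is a placeholder (substitute your own Subgraph query URL), and the entity and field names depend entirely on your schema:

```javascript
// Placeholder endpoint — replace with your deployment's query URL.
const endpoint = 'https://example.com/subgraphs/name/my-arweave-subgraph'

// Hypothetical query; entity/field names come from your schema.graphql.
const query = `{
  blocks(first: 5) {
    id
    timestamp
  }
}`

// A GraphQL query is just an HTTP POST with a JSON body containing the query.
function buildRequest(q) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: q }),
  }
}

// Usage: fetch(endpoint, buildRequest(query)).then((res) => res.json())
console.log(buildRequest(query).method) // POST
```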
-## Example Subgraphs
+## Example Subgraphs
Here is an example Subgraph for reference:
@@ -174,19 +174,19 @@ Here is an example Subgraph for reference:
No, a Subgraph can only support data sources from one chain/network.
-### Can I index the stored files on Arweave?
+### Can I index the stored files on Arweave?
-Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions).
+Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions).
### Can I identify Bundlr bundles in my Subgraph?
-This is not currently supported.
+This is not currently supported.
-### How can I filter transactions to a specific account?
+### How can I filter transactions to a specific account?
-The source.owner can be the user's public key or account address.
+The source.owner can be the user's public key or account address.
-### What is the current encryption format?
+### What is the current encryption format?
Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/).
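The hex-vs-base64 difference can be illustrated outside the mapping context with Node.js `Buffer` (inside an AssemblyScript mapping you would implement the equivalent conversion yourself); the hash value below is a shortened, made-up example:

```javascript
// 'deadbeef' stands in for a real block or transaction hash.
const bytes = Buffer.from('deadbeef', 'hex')

// Raw bytes rendered as hex — what the Subgraph returns by default.
console.log(bytes.toString('hex')) // deadbeef
// The same bytes as base64 and URL-safe base64, matching what
// explorers like Arweave Explorer display.
console.log(bytes.toString('base64')) // 3q2+7w==
console.log(bytes.toString('base64url')) // 3q2-7w
```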
diff --git a/website/src/pages/cs/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/cs/subgraphs/guides/contract-analyzer.mdx
index 084ac8d28a00..9f53796b8066 100644
--- a/website/src/pages/cs/subgraphs/guides/contract-analyzer.mdx
+++ b/website/src/pages/cs/subgraphs/guides/contract-analyzer.mdx
@@ -2,11 +2,15 @@
title: Smart Contract Analysis with Cana CLI
---
-# Cana CLI: Quick & Efficient Contract Analysis
+Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains.
-**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains.
+## Overview
-## 📌 Key Features
+**Cana CLI** is a command-line tool that streamlines helpful smart contract metadata analysis specific to subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more.
+
+### Key Features
+
+With Cana CLI, you can:
- Detect deployment blocks
- Verify source code
@@ -14,63 +18,75 @@ title: Smart Contract Analysis with Cana CLI
- Identify proxy and implementation contracts
- Support multiple chains
-## 🚀 Installation & Setup
+### Prerequisites
+
+Before installing Cana CLI, make sure you have:
+
+- [Node.js v16+](https://nodejs.org/en)
+- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install)
+- Block explorer API keys
+
+### Installation & Setup
-Install Cana globally using npm:
+1. Install Cana CLI
+
+Use npm to install it globally:
```bash
npm install -g contract-analyzer
```
-Set up a blockchain for analysis:
+2. Configure Cana CLI
+
+Set up a blockchain environment for analysis:
```bash
cana setup
```
-Provide the required block explorer API and block explorer endpoint URL details when prompted.
+During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL.
-Running `cana setup` creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use.
+After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use.
-## 🍳 Usage
+### Steps: Using Cana CLI for Smart Contract Analysis
-### 🔹 Chain Selection
+#### 1. Select a Chain
-Cana supports multiple EVM-compatible chains.
+Cana CLI supports multiple EVM-compatible chains.
-List chains added with:
+For a list of chains added run this command:
```bash
cana chains
```
-Then select a chain with:
+Then select a chain with this command:
```bash
cana chains --switch
```
-Once a chain is selected, all subsequent contract analases will continue on that chain.
+Once a chain is selected, all subsequent contract analyses will continue on that chain.
-### 🔹 Basic Contract Analysis
+#### 2. Basic Contract Analysis
-Analyze a contract with:
+Run the following command to analyze a contract:
```bash
cana analyze 0xContractAddress
```
-or
+or
```bash
cana -a 0xContractAddress
```
-This command displays essential contract information in the terminal using a clear, organized format.
+This command fetches and displays essential contract information in the terminal using a clear, organized format.
-### 🔹 Understanding Output
+#### 3. Understanding the Output
-Cana organizes results into the terminal as well as into a structured directory when detailed contract data is successfully retrieved:
+Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved:
```
contracts-analyzed/
@@ -80,24 +96,22 @@ contracts-analyzed/
└── event-information.json # Event signatures and examples
```
-### 🔹 Chain Management
+This format makes it easy to reference contract metadata, event signatures, and ABIs for subgraph development.
+
+#### 4. Chain Management
Add and manage chains:
```bash
-cana setup # Add a new chain
-cana chains # List configured chains
-cana chains -s # Swich chains.
+cana setup # Add a new chain
+cana chains # List configured chains
+cana chains -s # Switch chains
```
-## ⚠️ Troubleshooting
+### Troubleshooting
-- **Missing Data**: Ensure the contract address is correct, verified on the block explorer, and that your API key has the required permissions.
+Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions.
-## ✅ Requirements
-
-- Node.js v16+
-- npm v6+
-- Block explorer API keys
+### Conclusion
-Keep your contract analyses efficient and well-organized. 🚀
+With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support subgraph development with ease.
diff --git a/website/src/pages/cs/subgraphs/guides/enums.mdx b/website/src/pages/cs/subgraphs/guides/enums.mdx
index 9f55ae07c54b..7cc0e6c0ed78 100644
--- a/website/src/pages/cs/subgraphs/guides/enums.mdx
+++ b/website/src/pages/cs/subgraphs/guides/enums.mdx
@@ -269,6 +269,6 @@ Expected output includes the marketplaces that meet the criteria, each represent
}
```
-## Additional Resources
+## Additional Resources
For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums).
diff --git a/website/src/pages/cs/subgraphs/guides/grafting.mdx b/website/src/pages/cs/subgraphs/guides/grafting.mdx
index d9abe0e70d2a..a7bad43c9c1f 100644
--- a/website/src/pages/cs/subgraphs/guides/grafting.mdx
+++ b/website/src/pages/cs/subgraphs/guides/grafting.mdx
@@ -1,46 +1,46 @@
---
-title: Replace a Contract and Keep its History With Grafting
+title: Replace a Contract and Keep its History With Grafting
---
In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs.
-## What is Grafting?
+## What is Grafting?
Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch.
The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways:
-- It adds or removes entity types
-- It removes attributes from entity types
-- It adds nullable attributes to entity types
-- It turns non-nullable attributes into nullable attributes
-- It adds values to enums
-- It adds or removes interfaces
-- It changes for which entity types an interface is implemented
+- It adds or removes entity types
+- It removes attributes from entity types
+- It adds nullable attributes to entity types
+- It turns non-nullable attributes into nullable attributes
+- It adds values to enums
+- It adds or removes interfaces
+- It changes for which entity types an interface is implemented
-For more information, you can check:
+For more information, you can check:
- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs)
In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract.
-## Important Note on Grafting When Upgrading to the Network
+## Important Note on Grafting When Upgrading to the Network
> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network
-### Why Is This Important?
+### Why Is This Important?
Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio.
-### Best Practices
+### Best Practices
**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected.
**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data.
-By adhering to these guidelines, you minimize risks and ensure a smoother migration process.
+By adhering to these guidelines, you minimize risks and ensure a smoother migration process.
-## Building an Existing Subgraph
+## Building an Existing Subgraph
Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided:
@@ -48,7 +48,7 @@ Building Subgraphs is an essential part of The Graph, described more in depth [h
> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit).
-## Subgraph Manifest Definition
+## Subgraph Manifest Definition
The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use:
@@ -83,7 +83,7 @@ dataSources:
- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia`
- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted.
-## Grafting Manifest Definition
+## Grafting Manifest Definition
Grafting requires adding two new items to the original Subgraph manifest:
@@ -101,7 +101,7 @@ graft:
The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting
-## Deploying the Base Subgraph
+## Deploying the Base Subgraph
1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example`
2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo
@@ -117,7 +117,7 @@ The `base` and `block` values can be found by deploying two Subgraphs: one for t
}
```
-It returns something like this:
+It returns something like this:
```
{
@@ -140,9 +140,9 @@ It returns something like this:
Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting.
-## Deploying the Grafting Subgraph
+## Deploying the Grafting Subgraph
-The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc.
+The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc.
1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement`
2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graph-example` Subgraph. You can find this in Subgraph Studio.
@@ -159,7 +159,7 @@ The graft replacement subgraph.yaml will have a new contract address. This could
}
```
-It should return the following:
+It should return the following:
```
{
@@ -189,7 +189,7 @@ You can see that the `graft-replacement` Subgraph is indexing from older `graph-
Congrats! You have successfully grafted a Subgraph onto another Subgraph.
-## Additional Resources
+## Additional Resources
If you want more experience with grafting, here are a few examples for popular contracts:
diff --git a/website/src/pages/cs/subgraphs/guides/near.mdx b/website/src/pages/cs/subgraphs/guides/near.mdx
index e78a69eb7fa2..275c2aba0fd4 100644
--- a/website/src/pages/cs/subgraphs/guides/near.mdx
+++ b/website/src/pages/cs/subgraphs/guides/near.mdx
@@ -1,10 +1,10 @@
---
-title: Building Subgraphs on NEAR
+title: Building Subgraphs on NEAR
---
This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/).
-## What is NEAR?
+## What is NEAR?
[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information.
@@ -14,14 +14,14 @@ The Graph gives developers tools to process blockchain events and make the resul
Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs:
-- Block handlers: these are run on every new block
-- Receipt handlers: run every time a message is executed at a specified account
+- Block handlers: these are run on every new block
+- Receipt handlers: run every time a message is executed at a specified account
[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt):
-> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point.
+> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying receipts" at some point.
-## Building a NEAR Subgraph
+## Building a NEAR Subgraph
`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs.
@@ -46,7 +46,7 @@ $ graph codegen # generates types from the schema file identified in the manifes
$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
```
-### Subgraph Manifest Definition
+### Subgraph Manifest Definition
The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph:
@@ -85,16 +85,16 @@ accounts:
- morning.testnet
```
-NEAR data sources support two types of handlers:
+NEAR data sources support two types of handlers:
- `blockHandlers`: run on every new NEAR block. No `source.account` is required.
- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources).
-### Schema Definition
+### Schema Definition
Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
-### AssemblyScript Mappings
+### AssemblyScript Mappings
The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).
@@ -169,7 +169,7 @@ Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/g
This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs.
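As a plain-JavaScript illustration of the idea (in a mapping you would use `json.fromString(...)` from graph-ts rather than `JSON.parse`), the log string below is a made-up example of a stringified-JSON NEAR log:

```javascript
// Hypothetical NEAR contract log: many contracts emit events as
// stringified JSON, optionally behind an "EVENT_JSON:" prefix.
const log =
  'EVENT_JSON:{"standard":"nep171","event":"nft_mint","data":[{"owner_id":"alice.near"}]}'

// Strip the prefix if present, then parse the JSON payload.
const payload = log.startsWith('EVENT_JSON:') ? log.slice('EVENT_JSON:'.length) : log
const event = JSON.parse(payload)

console.log(event.event) // nft_mint
console.log(event.data[0].owner_id) // alice.near
```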
-## Deploying a NEAR Subgraph
+## Deploying a NEAR Subgraph
Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released).
@@ -191,14 +191,14 @@ $ graph deploy --node --ipfs https://api.thegraph.com/ipfs/
```
-### Local Graph Node (based on default configuration)
+### Local Graph Node (based on default configuration)
```sh
graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001
@@ -216,21 +216,21 @@ Once your Subgraph has been deployed, it will be indexed by Graph Node. You can
}
```
-### Indexing NEAR with a Local Graph Node
+### Indexing NEAR with a Local Graph Node
-Running a Graph Node that indexes NEAR has the following operational requirements:
+Running a Graph Node that indexes NEAR has the following operational requirements:
-- NEAR Indexer Framework with Firehose instrumentation
-- NEAR Firehose Component(s)
-- Graph Node with Firehose endpoint configured
+- NEAR Indexer Framework with Firehose instrumentation
+- NEAR Firehose Component(s)
+- Graph Node with Firehose endpoint configured
-We will provide more information on running the above components soon.
+We will provide more information on running the above components soon.
-## Querying a NEAR Subgraph
+## Querying a NEAR Subgraph
The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
-## Example Subgraphs
+## Example Subgraphs
Here are some example Subgraphs for reference:
@@ -240,7 +240,7 @@ Here are some example Subgraphs for reference:
## FAQ
-### How does the beta work?
+### How does the beta work?
NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments!
@@ -250,9 +250,9 @@ No, a Subgraph can only support data sources from one chain/network.
### Can Subgraphs react to more specific triggers?
-Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support.
+Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support.
-### Will receipt handlers trigger for accounts and their sub-accounts?
+### Will receipt handlers trigger for accounts and their sub-accounts?
If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts, for example the following would match all `mintbase1.near` sub-accounts:
@@ -264,11 +264,11 @@ accounts:
### Can NEAR Subgraphs make view calls to NEAR accounts during mappings?
-This is not supported. We are evaluating whether this functionality is required for indexing.
+This is not supported. We are evaluating whether this functionality is required for indexing.
### Can I use data source templates in my NEAR Subgraph?
-This is not currently supported. We are evaluating whether this functionality is required for indexing.
+This is not currently supported. We are evaluating whether this functionality is required for indexing.
### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph?
@@ -278,6 +278,6 @@ Pending functionality is not yet supported for NEAR Subgraphs. In the interim, y
If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com.
-## References
+## References
- [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton)
diff --git a/website/src/pages/cs/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/cs/subgraphs/guides/secure-api-keys-nextjs.mdx
index e17e594408ff..d311cfa5117e 100644
--- a/website/src/pages/cs/subgraphs/guides/secure-api-keys-nextjs.mdx
+++ b/website/src/pages/cs/subgraphs/guides/secure-api-keys-nextjs.mdx
@@ -1,22 +1,22 @@
---
-title: How to Secure API Keys Using Next.js Server Components
+title: How to Secure API Keys Using Next.js Server Components
---
-## Overview
+## Overview
We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key).
In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend.
-### Caveats
+### Caveats
-- Next.js server components do not protect API keys from being drained using denial of service attacks.
-- The Graph Network gateways have denial of service detection and mitigation strategies in place, however using server components may weaken these protections.
-- Next.js server components introduce centralization risks as the server can go down.
+- Next.js server components do not protect API keys from being drained using denial of service attacks.
+- The Graph Network gateways have denial of service detection and mitigation strategies in place; however, using server components may weaken these protections.
+- Next.js server components introduce centralization risks, as the server can go down.
-### Why It's Needed
+### Why It's Needed
-In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side.
+In a standard React application, API keys included in the frontend code can be exposed to the client side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys, since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side.
### Using client-side rendering to query a Subgraph
@@ -24,25 +24,25 @@ In a standard React application, API keys included in the frontend code can be e
### Prerequisites
-- An API key from [Subgraph Studio](https://thegraph.com/studio)
-- Basic knowledge of Next.js and React.
-- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app).
+- An API key from [Subgraph Studio](https://thegraph.com/studio)
+- Basic knowledge of Next.js and React.
+- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app).
-## Step-by-Step Cookbook
+## Kuchařka krok za krokem
-### Step 1: Set Up Environment Variables
+### Krok 1: Nastavení proměnných prostředí
-1. In our Next.js project root, create a `.env.local` file.
-2. Add our API key: `API_KEY=`.
+1. V kořeni našeho projektu Next.js vytvořte soubor `.env.local`.
+2. Přidejte náš klíč API: `API_KEY=`.
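Optionally, a tiny guard (our own helper, not part of Next.js) makes a missing key fail loudly at startup instead of silently sending unauthenticated requests:

```javascript
// Sketch: read a required environment variable, failing fast when absent.
function requireEnv(name, env = process.env) {
  const value = env[name]
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`)
  }
  return value
}

// Server-side usage: const API_KEY = requireEnv('API_KEY')
```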
-### Step 2: Create a Server Component
+### Krok 2: Vytvoření serverové komponenty
-1. In our `components` directory, create a new file, `ServerComponent.js`.
-2. Use the provided example code to set up the server component.
+1. V adresáři `components` vytvořte nový soubor `ServerComponent.js`.
+2. K nastavení serverové komponenty použijte uvedený ukázkový kód.
-### Step 3: Implement Server-Side API Request
+### Krok 3: Implementace požadavku API na straně serveru
-In `ServerComponent.js`, add the following code:
+Do souboru `ServerComponent.js` přidejte následující kód:
```javascript
const API_KEY = process.env.API_KEY
@@ -95,10 +95,10 @@ export default async function ServerComponent() {
}
```
-### Step 4: Use the Server Component
+### Krok 4: Použití serverové komponenty
-1. In our page file (e.g., `pages/index.js`), import `ServerComponent`.
-2. Render the component:
+1. V našem souboru stránky (např. `pages/index.js`) importujte `ServerComponent`.
+2. Vykreslete komponentu:
```javascript
import ServerComponent from './components/ServerComponent'
@@ -112,12 +112,12 @@ export default function Home() {
}
```
-### Step 5: Run and Test Our Dapp
+### Krok 5: Spuštění a otestování naší dapp
-Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key.
+Spusťte naši aplikaci Next.js pomocí `npm run dev`. Ověřte, že serverová komponenta načítá data bez vystavení klíče API.

-### Conclusion
+### Závěr
By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further.
diff --git a/website/src/pages/cs/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/cs/subgraphs/guides/subgraph-composition.mdx
new file mode 100644
index 000000000000..f5480ab15a48
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/guides/subgraph-composition.mdx
@@ -0,0 +1,132 @@
+---
+title: Aggregate Data Using Subgraph Composition
+sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs
+---
+
+Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it.
+
+Optimize your Subgraph by merging data from independent source Subgraphs into a single composable Subgraph to enhance data aggregation.
+
+## Úvod
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Speed up your Subgraph's syncing speed
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+
+- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts.
+- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers.
+
+While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
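As a rough sketch, the dependent Subgraph's manifest declares the source Subgraph as a data source by its deployment ID. The field names below are assumptions drawn from the specVersion 1.3.0 release notes; treat the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository as the authoritative reference:

```yaml
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph # a Subgraph data source rather than an onchain contract
    name: SourceSubgraph
    network: mainnet
    source:
      address: 'QmSourceSubgraphDeploymentId' # deployment ID of the source Subgraph
      startBlock: 0
    mapping:
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      file: ./src/mapping.ts
      entities:
        - Block
      handlers:
        - handler: handleBlock # fires on entities emitted by the source Subgraph
          entity: Block
```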
+
+## Prerequisites
+
+### Source Subgraphs
+
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs)
+- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
+- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
+- Source Subgraphs cannot use grafting on top of existing entities
+- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly
+
+### Composed Subgraphs
+
+- You can compose a **maximum of 5 source Subgraphs**
+- Composed Subgraphs can only use **datasources from the same chain**
+- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time
+- Aggregated entities can be used in composition, but entities composed on top of them cannot themselves use aggregations directly
+- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e., a composed Subgraph cannot also use regular event, call, or block handlers)
+
+Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs.
+
+## Začínáme
+
+The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph.
+
+### Specifics
+
+- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality.
+- Each source Subgraph is optimized with a specific entity.
+- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance.
+
+### Step 1. Deploy Block Time Source Subgraph
+
+This first source Subgraph calculates the block time for each block.
+
+- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined.
+- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly.
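Because source Subgraphs must use immutable entities (see the prerequisites above), the `block` entity can be sketched like this (field names are illustrative):

```graphql
type Block @entity(immutable: true) {
  id: Bytes!
  number: BigInt!
  timestamp: BigInt! # time the block was mined
}
```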
+
+To deploy this Subgraph locally, run the following commands:
+
+```bash
+npm install
+npm run codegen
+npm run build
+npm run create-local
+npm run deploy-local
+```
+
+### Step 2. Deploy Block Cost Source Subgraph
+
+This second source Subgraph indexes the cost of each block.
+
+#### Key Functions
+
+- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields.
+- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the same commands as above.
+
+### Step 3. Define Block Size in Source Subgraph
+
+This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above.
+
+#### Key Functions
+
+- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size.
+- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly.
+
+### Step 4. Combine Into Block Stats Subgraph
+
+This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above.
+
+> Note:
+>
+> - Any change to a source Subgraph will likely generate a new deployment ID.
+> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes.
+> - All source Subgraphs should be deployed before the composed Subgraph is deployed.
+
+#### Key Functions
+
+- It provides a consolidated data model that encompasses all relevant block metrics.
+- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses.
+
+## Key Takeaways
+
+- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs.
+- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph.
+- This feature unlocks scalability, simplifying both development and maintenance efficiency.
+
+## Další zdroje
+
+- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph).
+- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/).
+- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
diff --git a/website/src/pages/cs/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/cs/subgraphs/guides/subgraph-debug-forking.mdx
index 91aa7484d2ec..60ad21d2fe95 100644
--- a/website/src/pages/cs/subgraphs/guides/subgraph-debug-forking.mdx
+++ b/website/src/pages/cs/subgraphs/guides/subgraph-debug-forking.mdx
@@ -1,22 +1,22 @@
---
-title: Quick and Easy Subgraph Debugging Using Forks
+title: Rychlé a snadné ladění podgrafů pomocí forků
---
As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging!
-## Ok, what is it?
+## Ok, co to je?
**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one).
In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_.
-## What?! How?
+## Co?! Jak?
When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state.
-## Please, show me some code!
+## Ukažte mi prosím nějaký kód!
To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.
@@ -46,31 +46,31 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void {
Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error.
-The usual way to attempt a fix is:
+Obvyklý způsob, jak se pokusit o opravu, je:
-1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't).
+1. Proveďte změnu ve zdroji mapování, která podle vás problém vyřeší (zatímco já vím, že ne).
2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
-3. Wait for it to sync-up.
-4. If it breaks again go back to 1, otherwise: Hooray!
+3. Počkejte na synchronizaci.
+4. Pokud se opět rozbije, vraťte se na 1, jinak: Hurá!
It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._
Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks:
0. Spin-up a local Graph Node with the **_appropriate fork-base_** set.
-1. Make a change in the mappings source, which you believe will solve the issue.
+1. Proveďte změnu ve zdroji mapování, která podle vás problém vyřeší.
2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**.
-3. If it breaks again, go back to 1, otherwise: Hooray!
+3. Pokud se opět rozbije, vraťte se na 1, jinak: Hurá!
-Now, you may have 2 questions:
+Nyní můžete mít 2 otázky:
-1. fork-base what???
-2. Forking who?!
+1. fork-base co???
+2. Forkování koho?!
-And I answer:
+A já odpovídám:
1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store.
-2. Forking is easy, no need to sweat:
+2. Forkování je snadné, není třeba se potit:
```bash
$ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020
@@ -78,7 +78,7 @@ $ graph deploy --debug-fork --ipfs http://localhos
Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
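For example (the address and block number below are placeholders, not values from this guide):

```yaml
dataSources:
  - kind: ethereum/contract
    name: Gravity
    source:
      address: '0x0000000000000000000000000000000000000000' # contract being indexed
      startBlock: 6190343 # the problematic block, so indexing skips everything before it
```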
-So, here is what I do:
+Takže to dělám takhle:
1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
@@ -97,5 +97,5 @@ $ cargo run -p graph-node --release -- \
$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020
```
-4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working.
+4. Zkontroluji protokoly vytvořené místním uzlem Graph Node a hurá, zdá se, že vše funguje.
5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho)
diff --git a/website/src/pages/cs/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/cs/subgraphs/guides/subgraph-uncrashable.mdx
index a08e2a7ad8c9..bdc3671399e1 100644
--- a/website/src/pages/cs/subgraphs/guides/subgraph-uncrashable.mdx
+++ b/website/src/pages/cs/subgraphs/guides/subgraph-uncrashable.mdx
@@ -1,10 +1,10 @@
---
-title: Safe Subgraph Code Generator
+title: Generátor kódu bezpečného podgrafu
---
[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent.
-## Why integrate with Subgraph Uncrashable?
+## Proč se integrovat s aplikací Subgraph Uncrashable?
- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity.
@@ -16,11 +16,11 @@ title: Safe Subgraph Code Generator
- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification.
-- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function.
+- Framework také obsahuje způsob (prostřednictvím konfiguračního souboru), jak vytvořit vlastní, ale bezpečné funkce setteru pro skupiny proměnných entit. Tímto způsobem není možné, aby uživatel načetl/použil zastaralou entitu grafu, a také není možné zapomenout uložit nebo nastavit proměnnou, kterou funkce vyžaduje.
- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
-Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command.
+Subgraph Uncrashable lze spustit jako volitelný příznak pomocí příkazu codegen v Graph CLI.
```sh
graph codegen -u [options] []
diff --git a/website/src/pages/cs/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/cs/subgraphs/guides/transfer-to-the-graph.mdx
index a62072c48373..510b0ea317f6 100644
--- a/website/src/pages/cs/subgraphs/guides/transfer-to-the-graph.mdx
+++ b/website/src/pages/cs/subgraphs/guides/transfer-to-the-graph.mdx
@@ -12,9 +12,9 @@ Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized n
## Upgrade Your Subgraph to The Graph in 3 Easy Steps
-1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment)
-2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio)
-3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network)
+1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment)
+2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio)
+3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network)
## 1. Set Up Your Studio Environment
@@ -31,7 +31,7 @@ You must have [Node.js](https://nodejs.org/) and a package manager of your choic
On your local machine, run the following command:
-Using [npm](https://www.npmjs.com/):
+Pomocí [npm](https://www.npmjs.com/):
```sh
npm install -g @graphprotocol/graph-cli@latest
@@ -74,7 +74,7 @@ graph deploy --ipfs-hash
You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.
-#### Example
+#### Příklad
[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:
@@ -98,7 +98,7 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the
Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
-### Additional Resources
+### Další zdroje
- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
- To explore all the ways you can optimize and customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/cs/subgraphs/querying/best-practices.mdx b/website/src/pages/cs/subgraphs/querying/best-practices.mdx
index a28d505b9b46..038319488eda 100644
--- a/website/src/pages/cs/subgraphs/querying/best-practices.mdx
+++ b/website/src/pages/cs/subgraphs/querying/best-practices.mdx
@@ -4,7 +4,7 @@ title: Osvědčené postupy dotazování
The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language.
-Learn the essential GraphQL language rules and best practices to optimize your subgraph.
+Learn the essential GraphQL language rules and best practices to optimize your Subgraph.
---
@@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi
However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features:
-- Manipulace s podgrafy napříč řetězci: Dotazování z více podgrafů v jednom dotazu
+- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query
- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- Plně typovaný výsledek
@@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set `
### Use a single query to request multiple records
-By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
+By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
Example of inefficient querying:
diff --git a/website/src/pages/cs/subgraphs/querying/from-an-application.mdx b/website/src/pages/cs/subgraphs/querying/from-an-application.mdx
index b5e719983167..ef667e6b74c2 100644
--- a/website/src/pages/cs/subgraphs/querying/from-an-application.mdx
+++ b/website/src/pages/cs/subgraphs/querying/from-an-application.mdx
@@ -1,5 +1,6 @@
---
title: Dotazování z aplikace
+sidebarTitle: Querying from an App
---
Learn how to query The Graph from your application.
@@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d
### Subgraph Studio Endpoint
-After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:
+After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:
```
https://api.studio.thegraph.com/query///
@@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query///
### The Graph Network Endpoint
-After publishing your subgraph to the network, you will receive an endpoint that looks like this: :
+After publishing your Subgraph to the network, you will receive an endpoint that looks like this:
```
https://gateway.thegraph.com/api//subgraphs/id/
```
-> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data.
+> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data.
## Using Popular GraphQL Clients
@@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/
The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as:
-- Manipulace s podgrafy napříč řetězci: Dotazování z více podgrafů v jednom dotazu
+- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query
- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- Plně typovaný výsledek
@@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq
### Fetch Data with Graph Client
-Let's look at how to fetch data from a subgraph with `graph-client`:
+Let's look at how to fetch data from a Subgraph with `graph-client`:
#### Krok 1
@@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on
### Fetch Data with Apollo Client
-Let's look at how to fetch data from a subgraph with Apollo client:
+Let's look at how to fetch data from a Subgraph with Apollo client:
#### Krok 1
@@ -257,7 +258,7 @@ client
### Fetch data with URQL
-Let's look at how to fetch data from a subgraph with URQL:
+Let's look at how to fetch data from a Subgraph with URQL:
#### Krok 1
diff --git a/website/src/pages/cs/subgraphs/querying/graph-client/README.md b/website/src/pages/cs/subgraphs/querying/graph-client/README.md
index 416cadc13c6f..5dc2cfc408de 100644
--- a/website/src/pages/cs/subgraphs/querying/graph-client/README.md
+++ b/website/src/pages/cs/subgraphs/querying/graph-client/README.md
@@ -16,23 +16,23 @@ This library is intended to simplify the network aspect of data consumption for
| Status | Feature | Notes |
| :----: | ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- |
-| ✅ | Multiple indexers | based on fetch strategies |
-| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue |
-| ✅ | Build time validations & optimizations | |
-| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) |
-| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source |
-| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client |
-| ✅ | Local (client-side) Mutations | |
-| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) |
-| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit |
-| ✅ | Integration with `@apollo/client` | |
-| ✅ | Integration with `urql` | |
-| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` |
-| ✅ | [`@live` queries](./live.md) | Based on polling |
+| ✅ | Multiple indexers | based on fetch strategies |
+| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue |
+| ✅ | Build time validations & optimizations | |
+| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) |
+| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source |
+| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client |
+| ✅ | Local (client-side) Mutations | |
+| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) |
+| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit |
+| ✅ | Integration with `@apollo/client` | |
+| ✅ | Integration with `urql` | |
+| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` |
+| ✅ | [`@live` queries](./live.md) | Based on polling |
> You can find an [extended architecture design here](./architecture.md)
-## Getting Started
+## Začínáme
You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client:
@@ -138,7 +138,7 @@ graphclient serve-dev
And open http://localhost:4000/ to use GraphiQL. You can now experiment with your Graph client-side GraphQL schema locally! 🥳
-#### Examples
+#### Příklady
You can also refer to [examples directory in this repo](../examples), for more advanced examples and integration examples:
@@ -308,8 +308,8 @@ sources:
`highestValue`
-
- This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated.
+
+This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated.
This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources.
diff --git a/website/src/pages/cs/subgraphs/querying/graph-client/live.md b/website/src/pages/cs/subgraphs/querying/graph-client/live.md
index e6f726cb4352..0e3b535bd5d6 100644
--- a/website/src/pages/cs/subgraphs/querying/graph-client/live.md
+++ b/website/src/pages/cs/subgraphs/querying/graph-client/live.md
@@ -2,7 +2,7 @@
Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data.
-## Getting Started
+## Začínáme
Start by adding the following configuration to your `.graphclientrc.yml` file:
diff --git a/website/src/pages/cs/subgraphs/querying/graphql-api.mdx b/website/src/pages/cs/subgraphs/querying/graphql-api.mdx
index f0cc9b78b338..e5dc52ccce1f 100644
--- a/website/src/pages/cs/subgraphs/querying/graphql-api.mdx
+++ b/website/src/pages/cs/subgraphs/querying/graphql-api.mdx
@@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph.
## What is GraphQL?
-[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs.
-To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/).
+To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/).
## Queries with GraphQL
-In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type.
+In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type.
> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph.
@@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison:
You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`.
-To může být užitečné, pokud chcete načíst pouze entity, které se změnily například od posledního dotazování. Nebo může být užitečná pro zkoumání nebo ladění změn entit v podgrafu (v kombinaci s blokovým filtrem můžete izolovat pouze entity, které se změnily v určitém bloku).
+This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block).
```graphql
{
@@ -329,7 +329,7 @@ This query will return `Challenge` entities, and their associated `Application`
### Fulltextové Vyhledávání dotazy
-Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph.
+Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph.
Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field.
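As a minimal sketch of how such a query is assembled, the snippet below builds a fulltext search request body in Python. The field name `bandSearch` is a hypothetical example of a field defined with `@fulltext` in a Subgraph schema, and the operator list in the comment is assumed from the fulltext documentation rather than taken from this section:

```python
import json

def fulltext_query(field: str, text: str) -> str:
    """Build a GraphQL fulltext query; `field` is a search field defined
    with @fulltext in the Subgraph schema (hypothetical name below)."""
    return '{ %s(text: "%s") { id } }' % (field, text)

# "&" requires both terms; "|" matches either; "<->" means "followed by";
# a trailing ":*" matches by prefix. (Operator list assumed, not from this page.)
payload = json.dumps({"query": fulltext_query("bandSearch", "heavy & metal")})
```

The resulting `payload` is what you would POST to a query endpoint as `application/json`.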
@@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021
The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
+GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.
@@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en
### Metadata podgrafů
-All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows:
+All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows:
```graphQL
{
@@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s
}
```
-Pokud je uveden blok, metadata se vztahují k tomuto bloku, pokud ne, použije se poslední indexovaný blok. Pokud je blok uveden, musí se nacházet za počátečním blokem podgrafu a musí být menší nebo roven poslednímu Indevovaný bloku.
+If a block is provided, the metadata is as of that block; otherwise, the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block.
`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file.
@@ -427,6 +427,6 @@ Pokud je uveden blok, metadata se vztahují k tomuto bloku, pokud ne, použije s
- hash: hash bloku
- číslo: číslo bloku
-- timestamp: časové razítko bloku, pokud je k dispozici (v současné době je k dispozici pouze pro podgrafy indexující sítě EVM)
+- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks)
-`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block
+`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block
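Putting the fields above together, here is a small sketch of a `_meta` query and how its response shape maps to the fields this section describes. The block number and all response values are made-up placeholders:

```python
# Query _meta as of a specific block (number is a placeholder).
META_QUERY = """
{
  _meta(block: { number: 123456 }) {
    deployment
    hasIndexingErrors
    block { hash number timestamp }
  }
}
"""

# Shape of a typical response; every value below is hypothetical.
sample_response = {
    "data": {
        "_meta": {
            "deployment": "Qm...",  # IPFS CID of subgraph.yaml
            "hasIndexingErrors": False,
            "block": {"hash": "0x...", "number": 123456, "timestamp": 1700000000},
        }
    }
}

meta = sample_response["data"]["_meta"]
healthy = not meta["hasIndexingErrors"]
```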
diff --git a/website/src/pages/cs/subgraphs/querying/introduction.mdx b/website/src/pages/cs/subgraphs/querying/introduction.mdx
index 19ecde83f4a8..6169df767051 100644
--- a/website/src/pages/cs/subgraphs/querying/introduction.mdx
+++ b/website/src/pages/cs/subgraphs/querying/introduction.mdx
@@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex
## Přehled
-When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph.
+When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph.
## Specifics
-Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner.
+Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner.

@@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an
Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/).
-> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities.
+> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities.
>
> Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead.
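One way to guarantee a POST from Python's standard library is to attach a request body, since `urllib` sends POST whenever `data` is supplied. The URL below is a placeholder pattern, not a live endpoint:

```python
import json
import urllib.request

# Placeholder query URL; the real one comes from Graph Explorer / Subgraph Studio.
url = "https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>"

body = json.dumps({"query": "{ _meta { hasIndexingErrors } }"}).encode()
req = urllib.request.Request(
    url,
    data=body,
    headers={"Content-Type": "application/json"},
)
# Because `data` is set, urllib issues a POST, avoiding the 405 a GET triggers.
```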
diff --git a/website/src/pages/cs/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/cs/subgraphs/querying/managing-api-keys.mdx
index 0f5721e5cbcb..f2954c5593c0 100644
--- a/website/src/pages/cs/subgraphs/querying/managing-api-keys.mdx
+++ b/website/src/pages/cs/subgraphs/querying/managing-api-keys.mdx
@@ -1,14 +1,14 @@
---
-title: Správa klíčů API
+title: Managing API keys
---
## Přehled
-API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application.
+API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application.
### Create and Manage API Keys
-Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs.
+Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs.
The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
@@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page:
- Výše vynaložených GRT
2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can:
- Zobrazení a správa názvů domén oprávněných používat váš klíč API
- - Přiřazení podgrafů, na které se lze dotazovat pomocí klíče API
+ - Assign Subgraphs that can be queried with your API key
diff --git a/website/src/pages/cs/subgraphs/querying/python.mdx b/website/src/pages/cs/subgraphs/querying/python.mdx
index 669e95c19183..51e3b966a2b5 100644
--- a/website/src/pages/cs/subgraphs/querying/python.mdx
+++ b/website/src/pages/cs/subgraphs/querying/python.mdx
@@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds
sidebarTitle: Python (Subgrounds)
---
-Subgrounds je intuitivní knihovna Pythonu pro dotazování na podgrafy, vytvořená [Playgrounds](https://playgrounds.network/). Umožňuje přímo připojit data subgrafů k datovému prostředí Pythonu, což vám umožní používat knihovny jako [pandas](https://pandas.pydata.org/) k provádění analýzy dat!
+Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis!
Subgrounds nabízí jednoduché Pythonic API pro vytváření dotazů GraphQL, automatizuje zdlouhavé pracovní postupy, jako je stránkování, a umožňuje pokročilým uživatelům řízené transformace schémat.
@@ -17,24 +17,24 @@ pip install --upgrade subgrounds
python -m pip install --upgrade subgrounds
```
-Po instalaci můžete vyzkoušet podklady pomocí následujícího dotazu. Následující příklad uchopí podgraf pro protokol Aave v2 a dotazuje se na 5 největších trhů seřazených podle TVL (Total Value Locked), vybere jejich název a jejich TVL (v USD) a vrátí data jako pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame).
+Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame).
```python
from subgrounds import Subgrounds
sg = Subgrounds()
-# Načtení podgrafu
+# Load the Subgraph
aave_v2 = sg.load_subgraph(
"https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum")
-# Sestavte dotaz
+# Construct the query
latest_markets = aave_v2.Query.markets(
orderBy=aave_v2.Market.totalValueLockedUSD,
- orderDirection="desc",
+ orderDirection='desc',
first=5,
)
-# Vrátit dotaz do datového rámce
+# Return query to a dataframe
sg.query_df([
latest_markets.name,
latest_markets.totalValueLockedUSD,
diff --git a/website/src/pages/cs/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/cs/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
index 7bef9e129e33..7792cb56d855 100644
--- a/website/src/pages/cs/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
+++ b/website/src/pages/cs/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
@@ -2,17 +2,17 @@
title: ID podgrafu vs. ID nasazení
---
-Podgraf je identifikován ID podgrafu a každá verze podgrafu je identifikována ID nasazení.
+A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID.
-When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph.
+When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph.
Here are some key differences between the two IDs:
## ID nasazení
-The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
+The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
-When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published.
+When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup as there is full control over the Subgraph version being queried. However, this means the query code must be updated manually every time a new version of the Subgraph is published.
Příklad koncového bodu, který používá ID nasazení:
@@ -20,8 +20,8 @@ Příklad koncového bodu, který používá ID nasazení:
## ID podgrafu
-The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats.
+The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats.
-Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
+Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`
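The two endpoint shapes can be sketched as small URL builders. The Subgraph ID path matches the example above; the `/deployments/id/` path for Deployment IDs is an assumption here, since the deployment example was not shown in this excerpt:

```python
GATEWAY = "https://gateway-arbitrum.network.thegraph.com/api"

def subgraph_id_endpoint(api_key: str, subgraph_id: str) -> str:
    # Tracks the latest published version (may briefly lag while a new version syncs).
    return f"{GATEWAY}/{api_key}/subgraphs/id/{subgraph_id}"

def deployment_id_endpoint(api_key: str, deployment_id: str) -> str:
    # Pins one immutable version; this path shape is an assumption.
    return f"{GATEWAY}/{api_key}/deployments/id/{deployment_id}"
```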
diff --git a/website/src/pages/cs/subgraphs/quick-start.mdx b/website/src/pages/cs/subgraphs/quick-start.mdx
index 130f699763ce..7c52d4745a83 100644
--- a/website/src/pages/cs/subgraphs/quick-start.mdx
+++ b/website/src/pages/cs/subgraphs/quick-start.mdx
@@ -2,7 +2,7 @@
title: Rychlé spuštění
---
-Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
+Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
## Prerequisites
@@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/
## How to Build a Subgraph
-### 1. Create a subgraph in Subgraph Studio
+### 1. Create a Subgraph in Subgraph Studio
Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet.
-Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys.
+Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys.
-Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name".
+Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name".
### 2. Nainstalujte Graph CLI
@@ -37,13 +37,13 @@ Použitím [yarn](https://yarnpkg.com/):
yarn global add @graphprotocol/graph-cli
```
-### 3. Initialize your subgraph
+### 3. Initialize your Subgraph
-> Příkazy pro konkrétní podgraf najdete na stránce podgrafu v [Subgraph Studio](https://thegraph.com/studio/).
+> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/).
-The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events.
+The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events.
-The following command initializes your subgraph from an existing contract:
+The following command initializes your Subgraph from an existing contract:
```sh
graph init
@@ -51,42 +51,42 @@ graph init
If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI.
-When you initialize your subgraph, the CLI will ask you for the following information:
+When you initialize your Subgraph, the CLI will ask you for the following information:
-- **Protocol**: Choose the protocol your subgraph will be indexing data from.
-- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph.
-- **Directory**: Choose a directory to create your subgraph in.
-- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from.
+- **Protocol**: Choose the protocol your Subgraph will be indexing data from.
+- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph.
+- **Directory**: Choose a directory to create your Subgraph in.
+- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from.
- **Contract address**: Locate the smart contract address you’d like to query data from.
- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file.
-- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed.
+- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed.
- **Contract Name**: Input the name of your contract.
-- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event.
+- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event.
- **Add another contract** (optional): You can add another contract.
-Na následujícím snímku najdete příklad toho, co můžete očekávat při inicializaci podgrafu:
+See the following screenshot for an example of what to expect when initializing your Subgraph:

-### 4. Edit your subgraph
+### 4. Edit your Subgraph
-The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph.
+The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph.
-When making changes to the subgraph, you will mainly work with three files:
+When making changes to the Subgraph, you will mainly work with three files:
-- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index.
-- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph.
+- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index.
+- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph.
- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema.
-For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/).
+For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/).
-### 5. Deploy your subgraph
+### 5. Deploy your Subgraph
> Remember, deploying is not the same as publishing.
-When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
+When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
-Jakmile je podgraf napsán, spusťte následující příkazy:
+Once your Subgraph is written, run the following commands:
````
```sh
@@ -94,7 +94,7 @@ graph codegen && graph build
```
````
-Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio.
+Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio.

@@ -109,37 +109,37 @@ graph deploy
The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`.
-### 6. Review your subgraph
+### 6. Review your Subgraph
-If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:
+If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:
- Run a sample query.
-- Analyze your subgraph in the dashboard to check information.
-- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this:
+- Analyze your Subgraph in the dashboard to check information.
+- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this:

-### 7. Publish your subgraph to The Graph Network
+### 7. Publish your Subgraph to The Graph Network
-When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following:
+When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following:
-- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
-- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
-- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it.
+- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
+- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
+- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it.
-> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph.
+> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph.
#### Publishing with Subgraph Studio
-To publish your subgraph, click the Publish button in the dashboard.
+To publish your Subgraph, click the Publish button in the dashboard.
-
+
-Select the network to which you would like to publish your subgraph.
+Select the network to which you would like to publish your Subgraph.
#### Publishing from the CLI
-As of version 0.73.0, you can also publish your subgraph with the Graph CLI.
+As of version 0.73.0, you can also publish your Subgraph with the Graph CLI.
Open the `graph-cli`.
@@ -157,32 +157,32 @@ graph publish
```
````
-3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice.
+3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice.

To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
-#### Přidání signálu do podgrafu
+#### Adding signal to your Subgraph
-1. To attract Indexers to query your subgraph, you should add GRT curation signal to it.
+1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it.
- - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph.
+ - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph.
2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount.
- - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks.
+ - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks.
To learn more about curation, read [Curating](/resources/roles/curating/).
-To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option:
+To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option:

-### 8. Query your subgraph
+### 8. Query your Subgraph
-You now have access to 100,000 free queries per month with your subgraph on The Graph Network!
+You now have access to 100,000 free queries per month with your Subgraph on The Graph Network!
-You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button.
+You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button.
-For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/).
+For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/).
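A query against the Query URL is just a JSON body with a `query` string and optional `variables`. The sketch below builds such a body; the URL placeholders and the `tokens` entity are hypothetical, since entity names depend on your schema:

```python
import json

# Placeholders: the real Query URL is shown on your Subgraph's page.
QUERY_URL = "https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>"

query = """
query ($first: Int!) {
  tokens(first: $first) {  # `tokens` is a hypothetical entity
    id
  }
}
"""
payload = json.dumps({"query": query, "variables": {"first": 5}})
# POST `payload` to QUERY_URL as application/json; results arrive under "data".
```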
diff --git a/website/src/pages/cs/substreams/developing/dev-container.mdx b/website/src/pages/cs/substreams/developing/dev-container.mdx
index bd4acf16eec7..339ddb159c87 100644
--- a/website/src/pages/cs/substreams/developing/dev-container.mdx
+++ b/website/src/pages/cs/substreams/developing/dev-container.mdx
@@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container.
It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file).
-Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling.
+Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling.
## Prerequisites
@@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea
You can configure your project to query data either through a Subgraph or directly from an SQL database:
-- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph).
+- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph).
- **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink).
## Deployment Options
diff --git a/website/src/pages/cs/substreams/developing/sinks.mdx b/website/src/pages/cs/substreams/developing/sinks.mdx
index f87e46464532..d89161878fc9 100644
--- a/website/src/pages/cs/substreams/developing/sinks.mdx
+++ b/website/src/pages/cs/substreams/developing/sinks.mdx
@@ -1,5 +1,5 @@
---
-title: Official Sinks
+title: Sink your Substreams
---
Choose a sink that meets your project's needs.
@@ -8,7 +8,7 @@ Choose a sink that meets your project's needs.
Once you find a package that fits your needs, you can choose how you want to consume the data.
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph.
+Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph.
## Sinks
diff --git a/website/src/pages/cs/substreams/developing/solana/account-changes.mdx b/website/src/pages/cs/substreams/developing/solana/account-changes.mdx
index 8c309bbcce31..98da6949aef4 100644
--- a/website/src/pages/cs/substreams/developing/solana/account-changes.mdx
+++ b/website/src/pages/cs/substreams/developing/solana/account-changes.mdx
@@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu
> NOTE: History for the Solana Account Changes dates as of 2025, block 310629601.
-For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes).
+For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance are omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (e.g. lamport changes).
> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`.
diff --git a/website/src/pages/cs/substreams/developing/solana/transactions.mdx b/website/src/pages/cs/substreams/developing/solana/transactions.mdx
index a50984178cd8..a5415dcfd8e4 100644
--- a/website/src/pages/cs/substreams/developing/solana/transactions.mdx
+++ b/website/src/pages/cs/substreams/developing/solana/transactions.mdx
@@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi
## Step 3: Load the Data
-To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink.
+To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink.
### Podgrafy
1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions.
-2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`.
+2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`.
3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`.
### SQL
diff --git a/website/src/pages/cs/substreams/introduction.mdx b/website/src/pages/cs/substreams/introduction.mdx
index 57d215576f60..d68760ad1432 100644
--- a/website/src/pages/cs/substreams/introduction.mdx
+++ b/website/src/pages/cs/substreams/introduction.mdx
@@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh
## Substreams Benefits
-- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing.
+- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing.
- **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara.
- **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections.
- **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database.
diff --git a/website/src/pages/cs/substreams/publishing.mdx b/website/src/pages/cs/substreams/publishing.mdx
index 8e71c65c2eed..19415c7860d8 100644
--- a/website/src/pages/cs/substreams/publishing.mdx
+++ b/website/src/pages/cs/substreams/publishing.mdx
@@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s
### What is a package?
-A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs.
+A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs.
## Publish a Package
@@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data

-That's it! You have succesfully published a package in the Substreams registry.
+That's it! You have successfully published a package in the Substreams registry.

diff --git a/website/src/pages/cs/supported-networks.mdx b/website/src/pages/cs/supported-networks.mdx
index 6ccb230d548f..863814948ba7 100644
--- a/website/src/pages/cs/supported-networks.mdx
+++ b/website/src/pages/cs/supported-networks.mdx
@@ -18,11 +18,11 @@ export const getStaticProps = getSupportedNetworksStaticProps
- Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints.
- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier.
-- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks.
+- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks.
- For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
## Running Graph Node locally
If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration.
-Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support.
+Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support.
diff --git a/website/src/pages/cs/token-api/_meta-titles.json b/website/src/pages/cs/token-api/_meta-titles.json
index 692cec84bd58..7ed31e0af95d 100644
--- a/website/src/pages/cs/token-api/_meta-titles.json
+++ b/website/src/pages/cs/token-api/_meta-titles.json
@@ -1,5 +1,6 @@
{
"mcp": "MCP",
"evm": "EVM Endpoints",
- "monitoring": "Monitoring Endpoints"
+ "monitoring": "Monitoring Endpoints",
+ "faq": "FAQ"
}
diff --git a/website/src/pages/cs/token-api/_meta.js b/website/src/pages/cs/token-api/_meta.js
index 09aa7ffc2649..0e526f673a66 100644
--- a/website/src/pages/cs/token-api/_meta.js
+++ b/website/src/pages/cs/token-api/_meta.js
@@ -5,4 +5,5 @@ export default {
mcp: titles.mcp,
evm: titles.evm,
monitoring: titles.monitoring,
+ faq: '',
}
diff --git a/website/src/pages/cs/token-api/faq.mdx b/website/src/pages/cs/token-api/faq.mdx
new file mode 100644
index 000000000000..83196959be14
--- /dev/null
+++ b/website/src/pages/cs/token-api/faq.mdx
@@ -0,0 +1,109 @@
+---
+title: Token API FAQ
+---
+
+Get fast answers to easily integrate and scale with The Graph's high-performance Token API.
+
+## Obecný
+
+### What blockchains does the Token API support?
+
+Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One.
+
+### Why isn't my API key from The Graph Market working?
+
+Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key.
+
+### How current is the data provided by the API relative to the blockchain?
+
+The API provides data up to the latest finalized block.
+
+### How do I authenticate requests to the Token API?
+
+Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
+
+### Does the Token API provide a client SDK?
+
+While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol).
+
+### Are there plans to support additional blockchains in the future?
+
+Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol).
+
+### Are there plans to offer data closer to the chain head?
+
+Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol).
+
+### Are there plans to support additional use cases such as NFTs?
+
+The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
+
+## MCP / LLM / AI Topics
+
+### Is there a time limit for LLM queries?
+
+Yes. The maximum time limit for LLM queries is 10 seconds.
+
+### Is there a known list of LLMs that work with the API?
+
+Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server.
+
+Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter).
+
+### Where can I find the MCP client?
+
+You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client).
+
+## Advanced Topics
+
+### I'm getting 403/401 errors. What's wrong?
+
+Check that you included the `Authorization: Bearer <token>` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market.
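As a minimal sketch of the rules above (the helper name `buildAuthHeaders` is an assumption, not part of the Token API), the header can be assembled like this:

```typescript
// Hypothetical helper: assembles the headers a Token API request needs.
// Pass the JWT generated on The Graph Market, not the raw API key.
function buildAuthHeaders(jwt: string): Record<string, string> {
  if (!jwt || jwt.trim().length === 0) {
    throw new Error("Missing JWT: generate an Access Token on The Graph Market");
  }
  return {
    // The "Bearer " prefix is required; omitting it is a common cause of 401/403.
    Authorization: `Bearer ${jwt.trim()}`,
    Accept: "application/json",
  };
}
```

You would then pass these headers to any HTTP client, e.g. `fetch(url, { headers: buildAuthHeaders(token) })`.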
+
+### Are there rate limits or usage costs?
+
+During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+
+### What networks are supported, and how do I specify them?
+
+You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
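To illustrate, here is a small sketch of composing a request URL with the optional `network_id` parameter (the helper name and example address are assumptions; the parameter values follow the list above):

```typescript
// Sketch only: build a balances URL for a given chain.
// Omitting networkId leaves the API's default (Ethereum mainnet).
function balancesUrl(address: string, networkId?: string): string {
  const base = `https://token-api.thegraph.com/balances/evm/${address}`;
  return networkId ? `${base}?network_id=${encodeURIComponent(networkId)}` : base;
}
```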
+
+### Why do I only see 10 results? How can I get more data?
+
+Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
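A sketch of the pagination math (the clamping helper is an assumption; `limit` and `page` are the documented parameters):

```typescript
// Sketch: clamp pagination parameters to the documented bounds
// (limit capped at 500, pages 1-indexed) and build the query string.
function paginationQuery(limit: number, page: number): string {
  const cappedLimit = Math.min(Math.max(Math.trunc(limit), 1), 500);
  const safePage = Math.max(Math.trunc(page), 1);
  return `limit=${cappedLimit}&page=${safePage}`;
}
```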
+
+### How do I fetch older transfer history?
+
+The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call.
+
+### What does an empty `"data": []` array mean?
+
+An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error.
+
+### Why is the JSON response wrapped in a `"data"` array?
+
+All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`).
+
+### Why are token amounts returned as strings?
+
+Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values.
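For example, a minimal conversion using JavaScript's built-in `BigInt` (the helper is illustrative, not part of any SDK):

```typescript
// Sketch: turn a raw string amount plus its `decimals` field into a
// human-readable value without precision loss. BigInt avoids the
// Number safe-integer limit that raw token amounts often exceed.
function formatAmount(raw: string, decimals: number): string {
  const value = BigInt(raw);
  const base = 10n ** BigInt(decimals);
  const whole = value / base;
  const frac = (value % base)
    .toString()
    .padStart(decimals, "0")
    .replace(/0+$/, ""); // drop trailing zeros
  return frac.length > 0 ? `${whole}.${frac}` : whole.toString();
}
```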
+
+### What format should addresses be in?
+
+The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address.
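A sketch of normalizing a user-supplied address before sending it (the helper name and regex are assumptions based on the rules above):

```typescript
// Sketch: strip an optional 0x prefix, check for exactly 40 hex digits,
// and return a lowercase 0x-prefixed address.
function normalizeAddress(input: string): string {
  const hex = input.startsWith("0x") || input.startsWith("0X") ? input.slice(2) : input;
  if (!/^[0-9a-fA-F]{40}$/.test(hex)) {
    throw new Error("Expected 20 bytes (40 hex digits), with or without 0x");
  }
  return "0x" + hex.toLowerCase();
}
```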
+
+### Do I need special headers besides authentication?
+
+While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer <token>`. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`).
+
+### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this?
+
+For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`.
+
+### Is the Token API part of The Graph's GraphQL service?
+
+No, the Token API is a separate RESTful service. Unlike traditional Subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy Subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints.
+
+### Do I need to use MCP or tools like Claude, Cline, or Cursor?
+
+No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required.
diff --git a/website/src/pages/cs/token-api/mcp/claude.mdx b/website/src/pages/cs/token-api/mcp/claude.mdx
index 0da8f2be031d..aabd9c69d69a 100644
--- a/website/src/pages/cs/token-api/mcp/claude.mdx
+++ b/website/src/pages/cs/token-api/mcp/claude.mdx
@@ -12,7 +12,7 @@ sidebarTitle: Claude Desktop

-## Configuration
+## Konfigurace
Create or edit your `claude_desktop_config.json` file.
@@ -25,11 +25,11 @@ Create or edit your `claude_desktop_config.json` file.
```json label="claude_desktop_config.json"
{
"mcpServers": {
- "mcp-pinax": {
+ "token-api": {
"command": "npx",
"args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
"env": {
- "ACCESS_TOKEN": ""
+ "ACCESS_TOKEN": ""
}
}
}
diff --git a/website/src/pages/cs/token-api/mcp/cline.mdx b/website/src/pages/cs/token-api/mcp/cline.mdx
index ab54c0c8f6f0..2e8f478f68c1 100644
--- a/website/src/pages/cs/token-api/mcp/cline.mdx
+++ b/website/src/pages/cs/token-api/mcp/cline.mdx
@@ -10,9 +10,9 @@ sidebarTitle: Cline
- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path.
- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version.
-
+
-## Configuration
+## Konfigurace
Create or edit your `cline_mcp_settings.json` file.
diff --git a/website/src/pages/cs/token-api/mcp/cursor.mdx b/website/src/pages/cs/token-api/mcp/cursor.mdx
index 658108d1337b..fac3a1a1af73 100644
--- a/website/src/pages/cs/token-api/mcp/cursor.mdx
+++ b/website/src/pages/cs/token-api/mcp/cursor.mdx
@@ -12,7 +12,7 @@ sidebarTitle: Cursor

-## Configuration
+## Konfigurace
Create or edit your `~/.cursor/mcp.json` file.
diff --git a/website/src/pages/cs/token-api/quick-start.mdx b/website/src/pages/cs/token-api/quick-start.mdx
index 4653c3d41ac6..4083154b5a8b 100644
--- a/website/src/pages/cs/token-api/quick-start.mdx
+++ b/website/src/pages/cs/token-api/quick-start.mdx
@@ -1,6 +1,6 @@
---
title: Token API Quick Start
-sidebarTitle: Quick Start
+sidebarTitle: Rychlé spuštění
---

diff --git a/website/src/pages/de/about.mdx b/website/src/pages/de/about.mdx
index 61dbccdd5c84..30ff84ae06f0 100644
--- a/website/src/pages/de/about.mdx
+++ b/website/src/pages/de/about.mdx
@@ -30,25 +30,25 @@ Blockchain-Eigenschaften wie Endgültigkeit, Umstrukturierung der Kette und nich
## The Graph bietet eine Lösung
-The Graph löst diese Herausforderung mit einem dezentralen Protokoll, das Blockchain-Daten indiziert und eine effiziente und leistungsstarke Abfrage ermöglicht. Diese APIs (indizierte „Subgraphen“) können dann mit einer Standard-GraphQL-API abgefragt werden.
+The Graph löst diese Herausforderung mit einem dezentralen Protokoll, das die Blockchain-Daten indiziert und eine effiziente und leistungsstarke Abfrage ermöglicht. Diese APIs (indizierte „Subgraphen“) können dann mit einer Standard-GraphQL-API abgefragt werden.
Heute gibt es ein dezentralisiertes Protokoll, das durch die Open-Source-Implementierung von [Graph Node](https://github.com/graphprotocol/graph-node) unterstützt wird und diesen Prozess ermöglicht.
### Die Funktionsweise von The Graph
-Die Indizierung von Blockchain-Daten ist sehr schwierig, aber The Graph macht es einfach. The Graph lernt, wie man Ethereum-Daten mit Hilfe von Subgraphen indiziert. Subgraphs sind benutzerdefinierte APIs, die auf Blockchain-Daten aufgebaut sind. Sie extrahieren Daten aus einer Blockchain, verarbeiten sie und speichern sie so, dass sie nahtlos über GraphQL abgefragt werden können.
+Die Indexierung von Blockchain-Daten ist sehr schwierig, aber The Graph macht es einfach. The Graph lernt, wie man Ethereum-Daten mit Hilfe von Subgraphen indizieren kann. Subgraphen sind benutzerdefinierte APIs, die auf Blockchain-Daten aufgebaut sind. Sie extrahieren Daten aus einer Blockchain, verarbeiten sie und speichern sie so, dass sie nahtlos über GraphQL abgefragt werden können.
#### Besonderheiten
-- The Graph verwendet Subgraph-Beschreibungen, die als Subgraph Manifest innerhalb des Subgraphen bekannt sind.
+- The Graph verwendet Subgraph-Beschreibungen, die als Subgraph-Manifest innerhalb des Subgraphen bekannt sind.
-- Die Beschreibung des Subgraphs beschreibt die Smart Contracts, die für einen Subgraph von Interesse sind, die Ereignisse innerhalb dieser Verträge, auf die man sich konzentrieren sollte, und wie man die Ereignisdaten den Daten zuordnet, die The Graph in seiner Datenbank speichern wird.
+- Die Subgraph-Beschreibung beschreibt die Smart Contracts, die für einen Subgraphen von Interesse sind, die Ereignisse innerhalb dieser Verträge, auf die man sich konzentrieren soll, und wie man die Ereignisdaten den Daten zuordnet, die The Graph in seiner Datenbank speichern wird.
-- Wenn Sie einen Subgraphen erstellen, müssen Sie ein Subgraph Manifest schreiben.
+- Wenn Sie einen Subgraphen erstellen, müssen Sie ein Subgraphenmanifest schreiben.
-- Nachdem Sie das `Subgraph Manifest` geschrieben haben, können Sie das Graph CLI verwenden, um die Definition im IPFS zu speichern und einen Indexer anzuweisen, mit der Indizierung der Daten für diesen Subgraphen zu beginnen.
+- Nachdem Sie das `Subgraphenmanifest` geschrieben haben, können Sie das Graph CLI verwenden, um die Definition im IPFS zu speichern und einen Indexer anzuweisen, mit der Indizierung von Daten für diesen Subgraphen zu beginnen.
-Das nachstehende Diagramm enthält detailliertere Informationen über den Datenfluss, nachdem ein Subgraph Manifest mit Ethereum-Transaktionen bereitgestellt worden ist.
+Das nachstehende Diagramm enthält detailliertere Informationen über den Datenfluss, nachdem ein Subgraph-Manifest mit Ethereum-Transaktionen bereitgestellt wurde.

@@ -56,12 +56,12 @@ Der Ablauf ist wie folgt:
1. Eine Dapp fügt Ethereum durch eine Transaktion auf einem Smart Contract Daten hinzu.
2. Der Smart Contract gibt während der Verarbeitung der Transaktion ein oder mehrere Ereignisse aus.
-3. Graph Node scannt Ethereum kontinuierlich nach neuen Blöcken und den darin enthaltenen Daten für Ihren Subgraphen.
-4. Graph Node findet Ethereum-Ereignisse für Ihren Subgraphen in diesen Blöcken und führt die von Ihnen bereitgestellten Mapping-Handler aus. Das Mapping ist ein WASM-Modul, das die Dateneinheiten erstellt oder aktualisiert, die Graph Node als Reaktion auf Ethereum-Ereignisse speichert.
+3. Graph Node scannt Ethereum kontinuierlich nach neuen Blöcken und den darin enthaltenen Daten für Ihren Subgraph.
+4. Graph Node findet in diesen Blöcken Ethereum-Ereignisse für Ihren Subgraph und führt die von Ihnen bereitgestellten Mapping-Handler aus. Das Mapping ist ein WASM-Modul, das die Dateneinheiten erstellt oder aktualisiert, die Graph Node als Reaktion auf Ethereum-Ereignisse speichert.
5. Die Dapp fragt den Graph Node über den [GraphQL-Endpunkt](https://graphql.org/learn/) des Knotens nach Daten ab, die von der Blockchain indiziert wurden. Der Graph Node wiederum übersetzt die GraphQL-Abfragen in Abfragen für seinen zugrundeliegenden Datenspeicher, um diese Daten abzurufen, wobei er die Indexierungsfunktionen des Speichers nutzt. Die Dapp zeigt diese Daten in einer reichhaltigen Benutzeroberfläche für die Endnutzer an, mit der diese dann neue Transaktionen auf Ethereum durchführen können. Der Zyklus wiederholt sich.
## Nächste Schritte
-In den folgenden Abschnitten werden die Subgraphen, ihr Einsatz und die Datenabfrage eingehender behandelt.
+In den folgenden Abschnitten werden die Subgraphen, ihr Einsatz und die Datenabfrage näher erläutert.
-Bevor Sie Ihren eigenen Subgraphen schreiben, sollten Sie den [Graph Explorer](https://thegraph.com/explorer) erkunden und sich einige der bereits vorhandenen Subgraphen ansehen. Die Seite jedes Subgraphen enthält eine GraphQL- Playground, mit der Sie seine Daten abfragen können.
+Bevor Sie Ihren eigenen Subgraph schreiben, sollten Sie den [Graph Explorer](https://thegraph.com/explorer) erkunden und sich einige der bereits eingesetzten Subgraphen ansehen. Die Seite jedes Subgraphen enthält eine GraphQL-Spielwiese, mit der Sie seine Daten abfragen können.
diff --git a/website/src/pages/de/archived/_meta-titles.json b/website/src/pages/de/archived/_meta-titles.json
index 9501304a4305..68385040140c 100644
--- a/website/src/pages/de/archived/_meta-titles.json
+++ b/website/src/pages/de/archived/_meta-titles.json
@@ -1,3 +1,3 @@
{
- "arbitrum": "Scaling with Arbitrum"
+ "arbitrum": "Skalierung mit Arbitrum"
}
diff --git a/website/src/pages/de/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/de/archived/arbitrum/arbitrum-faq.mdx
index 54809f94fd9c..6fa6fbe5faaf 100644
--- a/website/src/pages/de/archived/arbitrum/arbitrum-faq.mdx
+++ b/website/src/pages/de/archived/arbitrum/arbitrum-faq.mdx
@@ -14,7 +14,7 @@ Durch die Skalierung von The Graph auf L2 können die Netzwerkteilnehmer nun von
- Von Ethereum übernommene Sicherheit
-Die Skalierung der Smart Contracts des Protokolls auf L2 ermöglicht den Netzwerkteilnehmern eine häufigere Interaktion zu geringeren Kosten in Form von Gasgebühren. So können Indexer beispielsweise häufiger Zuweisungen öffnen und schließen, um eine größere Anzahl von Subgraphen zu indexieren. Entwickler können Subgraphen leichter bereitstellen und aktualisieren, und Delegatoren können GRT häufiger delegieren. Kuratoren können einer größeren Anzahl von Subgraphen Signale hinzufügen oder entfernen - Aktionen, die bisher aufgrund der Kosten zu kostspielig waren, um sie häufig durchzuführen.
+Die Skalierung der Smart Contracts des Protokolls auf L2 ermöglicht den Netzwerkteilnehmern eine häufigere Interaktion zu geringeren Kosten in Form von Gasgebühren. So können Indexer beispielsweise häufiger Zuweisungen öffnen und schließen, um eine größere Anzahl von Subgraphen zu indexieren. Entwickler können Subgraphen leichter einsetzen und aktualisieren, und Delegatoren können GRT häufiger delegieren. Kuratoren können einer größeren Anzahl von Subgraphen Signale hinzufügen oder entfernen - Aktionen, die bisher aufgrund der Gaskosten zu kostspielig waren, um sie häufig durchzuführen.
Die The Graph-Community beschloss letztes Jahr nach dem Ergebnis der [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305)-Diskussion, mit Arbitrum weiterzumachen.
@@ -39,7 +39,7 @@ Um die Vorteile von The Graph auf L2 zu nutzen, verwenden Sie diesen Dropdown-Sc

-## Was muss ich als Entwickler von Subgraphen, Datenkonsument, Indexer, Kurator oder Delegator jetzt tun?
+## Was muss ich als Subgraph-Entwickler, Datenkonsument, Indexer, Kurator oder Delegator jetzt tun?
Netzwerk-Teilnehmer müssen zu Arbitrum wechseln, um weiterhin am The Graph Network teilnehmen zu können. Weitere Unterstützung finden Sie im [Leitfaden zum L2 Transfer Tool](/archived/arbitrum/l2-transfer-tools-guide/).
@@ -51,9 +51,9 @@ Alle Smart Contracts wurden gründlich [audited] (https://github.com/graphprotoc
Alles wurde gründlich getestet, und es gibt einen Notfallplan, um einen sicheren und nahtlosen Übergang zu gewährleisten. Einzelheiten finden Sie [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20).
-## Funktionieren die vorhandenen Subgraphen auf Ethereum?
+## Funktionieren die bestehenden Subgraphen auf Ethereum?
-Alle Subgraphen sind jetzt auf Arbitrum. Bitte lesen Sie den [Leitfaden zum L2 Transfer Tool](/archived/arbitrum/l2-transfer-tools-guide/), um sicherzustellen, dass Ihre Subgraphen reibungslos funktionieren.
+Alle Subgraphen sind jetzt auf Arbitrum. Bitte lesen Sie den [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/), um sicherzustellen, dass Ihre Subgraphen reibungslos funktionieren.
## Verfügt GRT über einen neuen Smart Contract, der auf Arbitrum eingesetzt wird?
@@ -77,4 +77,4 @@ Die Brücke wurde [umfangreich geprüft] (https://code4rena.com/contests/2022-10
Das Hinzufügen von GRT zu Ihrem Arbitrum-Abrechnungssaldo kann mit nur einem Klick in [Subgraph Studio] (https://thegraph.com/studio/) erfolgen. Sie können Ihr GRT ganz einfach mit Arbitrum verbinden und Ihre API-Schlüssel in einer einzigen Transaktion füllen.
-Visit the [Billing page](/subgraphs/billing/) for more detailed instructions on adding, withdrawing, or acquiring GRT.
+Besuchen Sie die [Abrechnungsseite](/subgraphs/billing/) für genauere Anweisungen zum Hinzufügen, Abheben oder Erwerben von GRT.
diff --git a/website/src/pages/de/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/de/archived/arbitrum/l2-transfer-tools-faq.mdx
index 8abcda305f8a..8ac2d50c81e7 100644
--- a/website/src/pages/de/archived/arbitrum/l2-transfer-tools-faq.mdx
+++ b/website/src/pages/de/archived/arbitrum/l2-transfer-tools-faq.mdx
@@ -24,19 +24,19 @@ Die Ausnahme sind Smart-Contract-Wallets wie Multisigs: Das sind Smart Contracts
Die L2-Transfer-Tools verwenden den nativen Mechanismus von Arbitrum, um Nachrichten von L1 nach L2 zu senden. Dieser Mechanismus wird "retryable ticket" genannt und wird von allen nativen Token-Bridges verwendet, einschließlich der Arbitrum GRT-Bridge. Sie können mehr über wiederholbare Tickets in den [Arbitrum docs](https://docs.arbitrum.io/arbos/l1-to-l2-messaging) lesen.
-Wenn Sie Ihre Vermögenswerte (Subgraph, Anteil, Delegation oder Kuration) an L2 übertragen, wird eine Nachricht über die Arbitrum GRT-Brücke gesendet, die ein wiederholbares Ticket in L2 erstellt. Das Transfer-Tool beinhaltet einen gewissen ETH-Wert in der Transaktion, der verwendet wird, um 1) die Erstellung des Tickets und 2) das Gas für die Ausführung des Tickets in L2 zu bezahlen. Da jedoch die Gaspreise in der Zeit, bis das Zertifikat zur Ausführung in L2 bereit ist, schwanken können, ist es möglich, dass dieser automatische Ausführungsversuch fehlschlägt. Wenn das passiert, hält die Arbitrum-Brücke das wiederholbare Zertifikat für bis zu 7 Tage am Leben, und jeder kann versuchen, das Ticket erneut "einzulösen" (was eine Geldbörse mit etwas ETH erfordert, die mit Arbitrum verbunden ist).
+Wenn Sie Ihre Vermögenswerte (Subgraph, Anteil, Delegation oder Kuration) an L2 übertragen, wird eine Nachricht über die Arbitrum GRT-Brücke gesendet, die ein wiederholbares Ticket in L2 erstellt. Das Transfer-Tool beinhaltet einen gewissen ETH-Wert in der Transaktion, der verwendet wird, um 1) die Erstellung des Tickets und 2) das Gas für die Ausführung des Tickets in L2 zu bezahlen. Da jedoch die Gaspreise in der Zeit, bis das Ticket zur Ausführung in L2 bereit ist, schwanken können, ist es möglich, dass dieser automatische Ausführungsversuch fehlschlägt. Wenn das passiert, hält die Arbitrum-Brücke das wiederholbare Ticket für bis zu 7 Tage am Leben, und jeder kann versuchen, das Ticket erneut „einzulösen“ (was eine Wallet mit etwas ETH erfordert, die mit Arbitrum verbunden ist).
-Dies ist der so genannte "Bestätigungsschritt" in allen Übertragungswerkzeugen - er wird in den meisten Fällen automatisch ausgeführt, da die automatische Ausführung meist erfolgreich ist, aber es ist wichtig, dass Sie sich vergewissern, dass die Übertragung erfolgreich war. Wenn dies nicht gelingt und es innerhalb von 7 Tagen keine erfolgreichen Wiederholungsversuche gibt, verwirft die Arbitrum-Brücke das Ticket, und Ihre Assets (Subgraph, Pfahl, Delegation oder Kuration) gehen verloren und können nicht wiederhergestellt werden. Die Entwickler des Graph-Kerns haben ein Überwachungssystem eingerichtet, um diese Situationen zu erkennen und zu versuchen, die Tickets einzulösen, bevor es zu spät ist, aber es liegt letztendlich in Ihrer Verantwortung, sicherzustellen, dass Ihr Transfer rechtzeitig abgeschlossen wird. Wenn Sie Probleme mit der Bestätigung Ihrer Transaktion haben, wenden Sie sich bitte an [dieses Formular] (https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) und die Entwickler des Kerns werden Ihnen helfen.
+Dies ist der sogenannte „Bestätigungsschritt“ in allen Übertragungswerkzeugen - er wird in den meisten Fällen automatisch ausgeführt, da die automatische Ausführung meistens erfolgreich ist, aber es ist wichtig, dass Sie sich vergewissern, dass die Übertragung erfolgreich war. Wenn dies nicht gelingt und es innerhalb von 7 Tagen keine erfolgreichen Wiederholungsversuche gibt, verwirft die Arbitrum-Brücke das Ticket, und Ihre Assets (Subgraph, Anteil, Delegation oder Kuration) gehen verloren und können nicht wiederhergestellt werden. Die Core-Entwickler von The Graph haben ein Überwachungssystem eingerichtet, um diese Situationen zu erkennen und zu versuchen, die Tickets einzulösen, bevor es zu spät ist, aber es liegt letztendlich in Ihrer Verantwortung, sicherzustellen, dass Ihr Transfer rechtzeitig abgeschlossen wird. Wenn Sie Probleme mit der Bestätigung Ihrer Transaktion haben, nutzen Sie bitte [dieses Formular](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) und die Core-Entwickler werden Ihnen helfen.
### Ich habe mit der Übertragung meiner Delegation/des Einsatzes/der Kuration begonnen und bin mir nicht sicher, ob sie an L2 weitergeleitet wurde. Wie kann ich bestätigen, dass sie korrekt übertragen wurde?
-If you don't see a banner on your profile asking you to finish the transfer, then it's likely the transaction made it safely to L2 and no more action is needed. If in doubt, you can check if Explorer shows your delegation, stake or curation on Arbitrum One.
+Wenn Sie in Ihrem Profil kein Banner sehen, das Sie auffordert, den Transfer abzuschließen, dann ist die Transaktion wahrscheinlich sicher auf L2 angekommen und es sind keine weiteren Maßnahmen erforderlich. Im Zweifelsfall können Sie überprüfen, ob der Explorer Ihre Delegation, Ihren Einsatz oder Ihre Kuration auf Arbitrum One anzeigt.
-If you have the L1 transaction hash (which you can find by looking at the recent transactions in your wallet), you can also confirm if the "retryable ticket" that carried the message to L2 was redeemed here: https://retryable-dashboard.arbitrum.io/ - if the auto-redeem failed, you can also connect your wallet there and redeem it. Rest assured that core devs are also monitoring for messages that get stuck, and will attempt to redeem them before they expire.
+Wenn Sie den L1-Transaktionshash haben (den Sie durch einen Blick auf die letzten Transaktionen in Ihrer Wallet finden können), können Sie auch überprüfen, ob das „retryable ticket“, das die Nachricht nach L2 transportiert hat, hier eingelöst wurde: https://retryable-dashboard.arbitrum.io/ - wenn die automatische Einlösung fehlgeschlagen ist, können Sie Ihre Wallet auch dort verbinden und es einlösen. Seien Sie versichert, dass die Kernentwickler auch Nachrichten überwachen, die stecken bleiben, und versuchen werden, sie einzulösen, bevor sie ablaufen.
## Subgraph-Transfer
-### Wie übertrage ich meinen Subgraphen
+### Wie übertrage ich meinen Subgraphen?
@@ -48,15 +48,15 @@ Um Ihren Subgraphen zu übertragen, müssen Sie die folgenden Schritte ausführe
3. Bestätigung der Übertragung von Subgraphen auf Arbitrum\*
-4. Veröffentlichung des Subgraphen auf Arbitrum beenden
+4. Veröffentlichung des Subgraphen auf Arbitrum abschließen
5. Abfrage-URL aktualisieren (empfohlen)
-\*Note that you must confirm the transfer within 7 days otherwise your subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
+\* Beachten Sie, dass Sie die Übertragung innerhalb von 7 Tagen bestätigen müssen, da sonst Ihr Subgraph verloren gehen kann. In den meisten Fällen wird dieser Schritt automatisch ausgeführt, aber eine manuelle Bestätigung kann erforderlich sein, wenn es einen Gaspreisanstieg auf Arbitrum gibt. Sollte es während dieses Prozesses zu Problemen kommen, gibt es Ressourcen, die helfen können: Kontaktieren Sie den Support unter support@thegraph.com oder auf [Discord](https://discord.gg/graphprotocol).
### Von wo aus soll ich meine Übertragung veranlassen?
-Sie können die Übertragung vom [Subgraph Studio] (https://thegraph.com/studio/), vom [Explorer] (https://thegraph.com/explorer) oder von einer beliebigen Subgraph-Detailseite aus starten. Klicken Sie auf die Schaltfläche "Subgraph übertragen" auf der Detailseite des Subgraphen, um die Übertragung zu starten.
+Sie können die Übertragung vom [Subgraph Studio](https://thegraph.com/studio/), vom [Explorer](https://thegraph.com/explorer) oder von einer beliebigen Subgraph-Detailseite aus starten. Klicken Sie auf die Schaltfläche „Subgraph übertragen“ auf der Detailseite des Subgraphen, um die Übertragung zu starten.
### Wie lange muss ich warten, bis mein Subgraph übertragen wird?
@@ -66,35 +66,35 @@ Die Übertragungszeit beträgt etwa 20 Minuten. Die Arbitrum-Brücke arbeitet im
Ihr Subgraph ist nur in dem Netzwerk auffindbar, in dem er veröffentlicht ist. Wenn Ihr Subgraph zum Beispiel auf Arbitrum One ist, können Sie ihn nur im Explorer auf Arbitrum One finden und nicht auf Ethereum. Bitte vergewissern Sie sich, dass Sie Arbitrum One in der Netzwerkumschaltung oben auf der Seite ausgewählt haben, um sicherzustellen, dass Sie sich im richtigen Netzwerk befinden. Nach der Übertragung wird der L1-Subgraph als veraltet angezeigt.
-### Muss mein Subgraph ( Teilgraph ) veröffentlicht werden, um ihn zu übertragen?
+### Muss mein Subgraph veröffentlicht werden, um ihn zu übertragen?
-Um das Subgraph-Transfer-Tool nutzen zu können, muss Ihr Subgraph bereits im Ethereum-Mainnet veröffentlicht sein und über ein Kurationssignal verfügen, das der Wallet gehört, die den Subgraph besitzt. Wenn Ihr Subgraph nicht veröffentlicht ist, empfehlen wir Ihnen, ihn einfach direkt auf Arbitrum One zu veröffentlichen - die damit verbundenen Gasgebühren sind erheblich niedriger. Wenn Sie einen veröffentlichten Subgraphen übertragen wollen, aber das Konto des Eigentümers kein Signal darauf kuratiert hat, können Sie einen kleinen Betrag (z.B. 1 GRT) von diesem Konto signalisieren; stellen Sie sicher, dass Sie ein "auto-migrating" Signal wählen.
+Um die Vorteile des Subgraph-Transfer-Tools zu nutzen, muss Ihr Subgraph bereits im Ethereum-Mainnet veröffentlicht sein und über ein Kurationssignal verfügen, das der Wallet gehört, die den Subgraph besitzt. Wenn Ihr Subgraph noch nicht veröffentlicht ist, empfehlen wir Ihnen, ihn einfach direkt auf Arbitrum One zu veröffentlichen - die damit verbundenen Gasgebühren sind erheblich niedriger. Wenn Sie einen veröffentlichten Subgraph transferieren wollen, aber das Konto des Besitzers kein Signal darauf kuratiert hat, können Sie einen kleinen Betrag (z.B. 1 GRT) von diesem Konto signalisieren; stellen Sie sicher, dass Sie das „auto-migrating“ Signal wählen.
-### Was passiert mit der Ethereum-Mainnet-Version meines Subgraphen, nachdem ich zu Arbitrum übergehe?
+### Was passiert mit der Ethereum-Hauptnetz-Version meines Subgraphen, nachdem ich zu Arbitrum gewechselt bin?
-Nach der Übertragung Ihres Subgraphen auf Arbitrum wird die Ethereum-Hauptnetzversion veraltet sein. Wir empfehlen Ihnen, Ihre Abfrage-URL innerhalb von 48 Stunden zu aktualisieren. Es gibt jedoch eine Schonfrist, die Ihre Mainnet-URL funktionsfähig hält, so dass jede Drittanbieter-Dapp-Unterstützung aktualisiert werden kann.
+Nach dem Transfer Ihres Subgraphen zu Arbitrum wird die Ethereum-Hauptnetzversion veraltet sein. Wir empfehlen Ihnen, Ihre Abfrage-URL innerhalb von 48 Stunden zu aktualisieren. Es gibt jedoch eine Schonfrist, die Ihre Mainnet-URL funktionsfähig hält, so dass jede Drittanbieter-Dapp-Unterstützung aktualisiert werden kann.
### Muss ich nach der Übertragung auch auf Arbitrum neu veröffentlichen?
Nach Ablauf des 20-minütigen Übertragungsfensters müssen Sie die Übertragung mit einer Transaktion in der Benutzeroberfläche bestätigen, um die Übertragung abzuschließen. Ihr L1-Endpunkt wird während des Übertragungsfensters und einer Schonfrist danach weiterhin unterstützt. Es wird empfohlen, dass Sie Ihren Endpunkt aktualisieren, wenn es Ihnen passt.
-### Will my endpoint experience downtime while re-publishing?
+### Kommt es während der Neuveröffentlichung zu Ausfallzeiten an meinem Endpunkt?
-It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the subgraph on L1 and whether they keep indexing it until the subgraph is fully supported on L2.
+Es ist unwahrscheinlich, aber möglich, dass es zu einer kurzen Ausfallzeit kommt, je nachdem, welche Indexer den Subgraphen auf L1 unterstützen und ob sie ihn weiter indizieren, bis der Subgraph auf L2 vollständig unterstützt wird.
### Ist die Veröffentlichung und Versionierung auf L2 die gleiche wie im Ethereum-Mainnet?
-Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the subgraph.
+Ja. Wählen Sie Arbitrum One als Ihr veröffentlichtes Netzwerk, wenn Sie in Subgraph Studio veröffentlichen. Im Studio wird der neueste Endpunkt verfügbar sein, der auf die letzte aktualisierte Version des Subgraphen verweist.
-### Bewegt sich die Kuration meines Untergraphen ( Subgraphen ) mit meinem Untergraphen?
+### Wird die Kuration meines Subgraphen mit meinem Subgraphen umziehen?
Wenn Sie die automatische Signalmigration gewählt haben, werden 100 % Ihrer eigenen Kuration mit Ihrem Subgraphen zu Arbitrum One übertragen. Alle Kurationssignale des Subgraphen werden zum Zeitpunkt des Transfers in GRT umgewandelt, und die GRT, die Ihrem Kurationssignal entsprechen, werden zum Prägen von Signalen auf dem L2-Subgraphen verwendet.
-Andere Kuratoren können wählen, ob sie ihren Anteil an GRT zurückziehen oder ihn ebenfalls auf L2 übertragen, um das Signal auf demselben Untergraphen zu prägen.
+Andere Kuratoren können wählen, ob sie ihren Anteil an GRT zurückziehen oder ihn ebenfalls auf L2 übertragen, um das Signal auf demselben Subgraphen zu prägen.
### Kann ich meinen Subgraph nach dem Transfer zurück ins Ethereum Mainnet verschieben?
-Nach der Übertragung wird Ihre Ethereum-Mainnet-Version dieses Untergraphen veraltet sein. Wenn Sie zum Mainnet zurückkehren möchten, müssen Sie Ihre Version neu bereitstellen und zurück zum Mainnet veröffentlichen. Es wird jedoch dringend davon abgeraten, zurück ins Ethereum Mainnet zu wechseln, da die Indexierungsbelohnungen schließlich vollständig auf Arbitrum One verteilt werden.
+Nach der Übertragung wird Ihre Ethereum Mainnet-Version dieses Subgraphen veraltet sein. Wenn Sie zum Mainnet zurückkehren möchten, müssen Sie den Subgraph erneut bereitstellen und im Mainnet veröffentlichen. Es wird jedoch dringend davon abgeraten, zurück zum Ethereum Mainnet zu wechseln, da die Indexierungsbelohnungen schließlich vollständig auf Arbitrum One verteilt werden.
### Warum brauche ich überbrückte ETH, um meine Überweisung abzuschließen?
@@ -112,11 +112,11 @@ Um Ihre Delegation zu übertragen, müssen Sie die folgenden Schritte ausführen
2. 20 Minuten auf Bestätigung warten
3. Bestätigung der Delegationsübertragung auf Arbitrum
-\*\*\*\*You must confirm the transaction to complete the delegation transfer on Arbitrum. This step must be completed within 7 days or the delegation could be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
+\*\*\*\*Sie müssen die Transaktion bestätigen, um die Übertragung der Delegation auf Arbitrum abzuschließen. Dieser Schritt muss innerhalb von 7 Tagen abgeschlossen werden, da die Delegation sonst verloren gehen kann. In den meisten Fällen läuft dieser Schritt automatisch ab, aber eine manuelle Bestätigung kann erforderlich sein, wenn es auf Arbitrum zu einer Gaspreiserhöhung kommt. Sollte es während dieses Prozesses zu Problemen kommen, gibt es Ressourcen, die helfen können: Kontaktieren Sie den Support unter support@thegraph.com oder auf [Discord](https://discord.gg/graphprotocol).
### Was passiert mit meinen Rewards, wenn ich einen Transfer mit einer offenen Zuteilung im Ethereum Mainnet initiiere?
-If the Indexer to whom you're delegating is still operating on L1, when you transfer to Arbitrum you will forfeit any delegation rewards from open allocations on Ethereum mainnet. This means that you will lose the rewards from, at most, the last 28-day period. If you time the transfer right after the Indexer has closed allocations you can make sure this is the least amount possible. If you have a communication channel with your Indexer(s), consider discussing with them to find the best time to do your transfer.
+Wenn der Indexer, an den Sie delegieren, noch auf L1 arbeitet, verlieren Sie beim Wechsel zu Arbitrum alle Delegationsbelohnungen aus offenen Zuteilungen im Ethereum Mainnet. Das bedeutet, dass Sie höchstens die Rewards aus dem letzten 28-Tage-Zeitraum verlieren. Wenn Sie den Transfer direkt nach der Schließung der Zuteilungen durch den Indexer durchführen, können Sie sicherstellen, dass der Betrag so gering wie möglich ist. Wenn Sie einen Kommunikationskanal mit Ihrem Indexer haben, sollten Sie mit ihm über den besten Zeitpunkt für den Transfer sprechen.
### Was passiert, wenn der Indexer, an den ich derzeit delegiere, nicht auf Arbitrum One ist?
@@ -124,7 +124,7 @@ Das L2-Transfer-Tool wird nur aktiviert, wenn der Indexer, den Sie delegiert hab
### Haben Delegatoren die Möglichkeit, an einen anderen Indexierer zu delegieren?
-If you wish to delegate to another Indexer, you can transfer to the same Indexer on Arbitrum, then undelegate and wait for the thawing period. After this, you can select another active Indexer to delegate to.
+Wenn Sie an einen anderen Indexer delegieren möchten, können Sie auf denselben Indexer auf Arbitrum übertragen, dann die Delegation aufheben und die Auftau-Phase abwarten. Danach können Sie einen anderen aktiven Indexer auswählen, an den Sie delegieren möchten.
### Was ist, wenn ich den Indexer, an den ich delegiere, auf L2 nicht finden kann?
@@ -144,53 +144,53 @@ Es wird davon ausgegangen, dass die gesamte Netzbeteiligung in Zukunft zu Arbitr
### Wie lange dauert es, bis die Übertragung meiner Delegation auf L2 abgeschlossen ist?
-A 20-minute confirmation is required for delegation transfer. Please note that after the 20-minute period, you must come back and complete step 3 of the transfer process within 7 days. If you fail to do this, then your delegation may be lost. Note that in most cases the transfer tool will complete this step for you automatically. In case of a failed auto-attempt, you will need to complete it manually. If any issues arise during this process, don't worry, we'll be here to help: contact us at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
+Für die Übertragung von Delegationen ist eine 20-minütige Bestätigung erforderlich. Bitte beachten Sie, dass Sie nach Ablauf der 20-Minuten-Frist innerhalb von 7 Tagen zurückkommen und Schritt 3 des Übertragungsverfahrens abschließen müssen. Wenn Sie dies versäumen, kann Ihre Delegation verloren gehen. Beachten Sie bitte, dass das Übertragungstool diesen Schritt in den meisten Fällen automatisch für Sie ausführt. Falls der automatische Versuch fehlschlägt, müssen Sie ihn manuell ausführen. Sollten während dieses Vorgangs Probleme auftreten, sind wir für Sie da: Kontaktieren Sie uns unter support@thegraph.com oder auf [Discord](https://discord.gg/graphprotocol).
### Kann ich meine Delegation übertragen, wenn ich eine GRT Vesting Contract/Token Lock Wallet verwende?
Ja! Der Prozess ist ein wenig anders, weil Vesting-Verträge die ETH, die für die Bezahlung des L2-Gases benötigt werden, nicht weiterleiten können, also müssen Sie sie vorher einzahlen. Wenn Ihr Berechtigungsvertrag nicht vollständig freigeschaltet ist, müssen Sie außerdem zuerst einen Gegenkontrakt auf L2 initialisieren und können die Delegation dann nur auf diesen L2-Berechtigungsvertrag übertragen. Die Benutzeroberfläche des Explorers kann Sie durch diesen Prozess leiten, wenn Sie sich über die Vesting Lock Wallet mit dem Explorer verbunden haben.
-### Does my Arbitrum vesting contract allow releasing GRT just like on mainnet?
+### Erlaubt mein Arbitrum-„Vesting“-Vertrag die Freigabe von GRT genau wie im Mainnet?
-No, the vesting contract that is created on Arbitrum will not allow releasing any GRT until the end of the vesting timeline, i.e. until your contract is fully vested. This is to prevent double spending, as otherwise it would be possible to release the same amounts on both layers.
+Nein, der Vesting-Vertrag, der auf Arbitrum erstellt wird, erlaubt keine Freigabe von GRT bis zum Ende des Vesting-Zeitraums, d.h. bis Ihr Vertrag vollständig freigegeben ist. Damit sollen Doppelausgaben verhindert werden, da es sonst möglich wäre, die gleichen Beträge auf beiden Ebenen freizugeben.
-If you'd like to release GRT from the vesting contract, you can transfer them back to the L1 vesting contract using Explorer: in your Arbitrum One profile, you will see a banner saying you can transfer GRT back to the mainnet vesting contract. This requires a transaction on Arbitrum One, waiting 7 days, and a final transaction on mainnet, as it uses the same native bridging mechanism from the GRT bridge.
+Wenn Sie GRT aus dem Vesting-Vertrag freigeben möchten, können Sie sie mit dem Explorer zurück in den L1-Vesting-Vertrag übertragen: In Ihrem Arbitrum One-Profil wird ein Banner angezeigt, das besagt, dass Sie GRT zurück in den Mainnet-Vesting-Vertrag übertragen können. Dies erfordert eine Transaktion auf Arbitrum One, eine Wartezeit von 7 Tagen und eine abschließende Transaktion auf dem Mainnet, da derselbe native Überbrückungsmechanismus der GRT-Bridge verwendet wird.
### Fällt eine Delegationssteuer an?
-Nein. Auf L2 erhaltene Token werden im Namen des angegebenen Delegators an den angegebenen Indexierer delegiert, ohne dass eine Delegationssteuer erhoben wird.
+Nein. Auf L2 erhaltene Token werden im Namen des angegebenen Delegators an den angegebenen Indexer delegiert, ohne dass eine Delegationssteuer erhoben wird.
-### Will my unrealized rewards be transferred when I transfer my delegation?
+### Werden meine nicht realisierten Rewards übertragen, wenn ich meine Delegation übertrage?
-Yes! The only rewards that can't be transferred are the ones for open allocations, as those won't exist until the Indexer closes the allocations (usually every 28 days). If you've been delegating for a while, this is likely only a small fraction of rewards.
+Ja! Die einzigen Rewards, die nicht übertragen werden können, sind die für offene Zuteilungen, da diese erst existieren, wenn der Indexer die Zuteilungen schließt (normalerweise alle 28 Tage). Wenn Sie schon eine Weile delegieren, ist dies wahrscheinlich nur ein kleiner Teil der Rewards.
-At the smart contract level, unrealized rewards are already part of your delegation balance, so they will be transferred when you transfer your delegation to L2.
+Auf der Smart-Contract-Ebene sind nicht realisierte Rewards bereits Teil Ihres Delegationsguthabens, so dass sie übertragen werden, wenn Sie Ihre Delegation auf L2 übertragen.
-### Is moving delegations to L2 mandatory? Is there a deadline?
+### Ist die Verlegung von Delegationen nach L2 obligatorisch? Gibt es eine Frist?
-Moving delegation to L2 is not mandatory, but indexing rewards are increasing on L2 following the timeline described in [GIP-0052](https://forum.thegraph.com/t/gip-0052-timeline-and-requirements-to-increase-rewards-in-l2/4193). Eventually, if the Council keeps approving the increases, all rewards will be distributed in L2 and there will be no indexing rewards for Indexers and Delegators on L1.
+Die Verlagerung der Delegation auf L2 ist nicht zwingend erforderlich, aber die Indexierungs-Rewards steigen auf L2 entsprechend dem in [GIP-0052](https://forum.thegraph.com/t/gip-0052-timeline-and-requirements-to-increase-rewards-in-l2/4193) beschriebenen Zeitplan. Wenn der Rat die Erhöhungen weiterhin genehmigt, werden schließlich alle Rewards auf L2 verteilt und es wird keine Indexierungs-Rewards für Indexer und Delegatoren auf L1 geben.
-### If I am delegating to an Indexer that has already transferred stake to L2, do I stop receiving rewards on L1?
+### Wenn ich an einen Indexer delegiere, der bereits Anteile auf L2 übertragen hat, erhalte ich dann keine Rewards mehr auf L1?
-Many Indexers are transferring stake gradually so Indexers on L1 will still be earning rewards and fees on L1, which are then shared with Delegators. Once an Indexer has transferred all of their stake, then they will stop operating on L1, so Delegators will not receive any more rewards unless they transfer to L2.
+Viele Indexer übertragen ihre Anteile nach und nach, so dass Indexer auf L1 immer noch Rewards und Gebühren auf L1 verdienen, die dann mit den Delegatoren geteilt werden. Sobald ein Indexer seinen gesamten Anteil übertragen hat, wird er seine Tätigkeit auf L1 einstellen, so dass die Delegatoren keine Rewards mehr erhalten, es sei denn, sie wechseln zu L2.
-Eventually, if the Council keeps approving the indexing rewards increases in L2, all rewards will be distributed on L2 and there will be no indexing rewards for Indexers and Delegators on L1.
+Wenn das Council die Erhöhungen der Indexierungs-Rewards auf L2 weiterhin genehmigt, werden schließlich alle Rewards auf L2 verteilt und es wird keine Indexierungs-Rewards für Indexer und Delegatoren auf L1 geben.
-### I don't see a button to transfer my delegation. Why is that?
+### Ich sehe keine Schaltfläche zum Übertragen meiner Delegation. Woran liegt das?
-Your Indexer has probably not used the L2 transfer tools to transfer stake yet.
+Ihr Indexer hat wahrscheinlich noch nicht die L2-Transfer-Tools zur Übertragung von Anteilen verwendet.
-If you can contact the Indexer, you can encourage them to use the L2 Transfer Tools so that Delegators can transfer delegations to their L2 Indexer address.
+Wenn Sie sich mit dem Indexer in Verbindung setzen können, können Sie ihn ermutigen, die L2-Transfer-Tools zu verwenden, damit die Delegatoren Delegationen an ihre L2-Indexer-Adresse übertragen können.
-### My Indexer is also on Arbitrum, but I don't see a button to transfer the delegation in my profile. Why is that?
+### Mein Indexer ist auch auf Arbitrum, aber ich sehe in meinem Profil keine Schaltfläche zum Übertragen der Delegation. Warum ist das so?
-It is possible that the Indexer has set up operations on L2, but hasn't used the L2 transfer tools to transfer stake. The L1 smart contracts will therefore not know about the Indexer's L2 address. If you can contact the Indexer, you can encourage them to use the transfer tool so that Delegators can transfer delegations to their L2 Indexer address.
+Es ist möglich, dass der Indexer Operationen auf L2 eingerichtet hat, aber nicht die L2-Transfer-Tools zur Übertragung von Einsätzen verwendet hat. Die L1-Smart Contracts kennen daher die L2-Adresse des Indexers nicht. Wenn Sie sich mit dem Indexer in Verbindung setzen können, können Sie ihn ermutigen, das Übertragungswerkzeug zu verwenden, damit Delegatoren Delegationen an seine L2-Indexer-Adresse übertragen können.
-### Can I transfer my delegation to L2 if I have started the undelegating process and haven't withdrawn it yet?
+### Kann ich meine Delegation auf L2 übertragen, wenn ich die Aufhebung der Delegation eingeleitet, sie aber noch nicht abgehoben habe?
-No. If your delegation is thawing, you have to wait the 28 days and withdraw it.
+Nein. Wenn Ihre Delegation auftaut, müssen Sie die 28 Tage abwarten und sie zurückziehen.
-The tokens that are being undelegated are "locked" and therefore cannot be transferred to L2.
+Die Token, deren Delegation gerade aufgehoben wird, sind „gesperrt“ und können daher nicht auf L2 übertragen werden.
## Kurationssignal
@@ -206,9 +206,9 @@ Um Ihre Kuration zu übertragen, müssen Sie die folgenden Schritte ausführen:
\* Falls erforderlich - d.h. wenn Sie eine Vertragsadresse verwenden.
-### Wie erfahre ich, ob der von mir kuratierte Subgraph nach L2 umgezogen ist?
+### Wie erfahre ich, ob der von mir kuratierte Subgraph nach L2 verschoben wurde?
-Auf der Seite mit den Details der Subgraphen werden Sie durch ein Banner darauf hingewiesen, dass dieser Subgraph übertragen wurde. Sie können der Aufforderung folgen, um Ihre Kuration zu übertragen. Diese Information finden Sie auch auf der Seite mit den Details zu jedem verschobenen Subgraphen.
+Wenn Sie die Detailseite des Subgraphen aufrufen, werden Sie durch ein Banner darauf hingewiesen, dass dieser Subgraph übertragen wurde. Sie können der Aufforderung folgen, um Ihre Kuration zu übertragen. Sie finden diese Information auch auf der Seite mit den Details zu jedem verschobenen Subgraphen.
### Was ist, wenn ich meine Kuration nicht auf L2 verschieben möchte?
@@ -226,7 +226,7 @@ Zurzeit gibt es keine Option für Massenübertragungen.
### Wie übertrage ich meine Anteile auf Arbitrum?
-> Disclaimer: If you are currently unstaking any portion of your GRT on your Indexer, you will not be able to use L2 Transfer Tools.
+> Haftungsausschluss: Wenn Sie derzeit einen Teil Ihrer GRT bei Ihrem Indexer abziehen (Unstaking), können Sie die L2-Transfer-Tools nicht verwenden.
@@ -238,7 +238,7 @@ Um Ihren Einsatz zu übertragen, müssen Sie die folgenden Schritte ausführen:
3. Bestätigen Sie die Übertragung von Anteilen auf Arbitrum
-\*Note that you must confirm the transfer within 7 days otherwise your stake may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
+\*Beachten Sie, dass Sie den Transfer innerhalb von 7 Tagen bestätigen müssen, sonst kann Ihr Einsatz verloren gehen. In den meisten Fällen wird dieser Schritt automatisch ausgeführt, aber eine manuelle Bestätigung kann erforderlich sein, wenn es einen Gaspreisanstieg auf Arbitrum gibt. Sollte es während dieses Prozesses zu Problemen kommen, gibt es Ressourcen, die Ihnen helfen: Kontaktieren Sie den Support unter support@thegraph.com oder auf [Discord](https://discord.gg/graphprotocol).
### Wird mein gesamter Einsatz übertragen?
@@ -276,13 +276,13 @@ Nein, damit Delegatoren ihre delegierten GRT an Arbitrum übertragen können, mu
Ja! Der Prozess ist ein wenig anders, weil Vesting-Verträge die ETH, die für die Bezahlung des L2-Gases benötigt werden, nicht weiterleiten können, so dass Sie sie vorher einzahlen müssen. Wenn Ihr Freizügigkeitsvertrag nicht vollständig freigeschaltet ist, müssen Sie außerdem zuerst einen Gegenkontrakt auf L2 initialisieren und können den Anteil nur auf diesen L2-Freizügigkeitsvertrag übertragen. Die Benutzeroberfläche des Explorers kann Sie durch diesen Prozess führen, wenn Sie sich mit dem Explorer über die Vesting Lock Wallet verbunden haben.
-### I already have stake on L2. Do I still need to send 100k GRT when I use the transfer tools the first time?
+### Ich habe bereits einen Einsatz auf L2. Muss ich immer noch 100k GRT senden, wenn ich die Transfer-Tools zum ersten Mal benutze?
-Yes. The L1 smart contracts will not be aware of your L2 stake, so they will require you to transfer at least 100k GRT when you transfer for the first time.
+Ja. Die L1-Smart-Contracts kennen Ihren L2-Einsatz nicht und verlangen daher, dass Sie beim ersten Transfer mindestens 100k GRT übertragen.
-### Can I transfer my stake to L2 if I am in the process of unstaking GRT?
+### Kann ich meinen Anteil auf L2 übertragen, wenn ich gerade dabei bin, GRT zu entstaken?
-No. If any fraction of your stake is thawing, you have to wait the 28 days and withdraw it before you can transfer stake. The tokens that are being staked are "locked" and will prevent any transfers or stake to L2.
+Nein. Wenn ein Teil Ihres Einsatzes auftaut, müssen Sie die 28 Tage warten und ihn abheben, bevor Sie den Einsatz übertragen können. Die Token, die eingesetzt werden, sind „gesperrt“ und verhindern jede Übertragung oder jeden Einsatz auf L2.
## Unverfallbare Vertragsübertragung
@@ -377,25 +377,25 @@ Um Ihren Vesting-Vertrag auf L2 zu übertragen, senden Sie ein eventuelles GRT-G
\* Falls erforderlich - d.h. wenn Sie eine Vertragsadresse verwenden.
-\*\*\*\*You must confirm your transaction to complete the balance transfer on Arbitrum. This step must be completed within 7 days or the balance could be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
+\*\*\*\*Sie müssen Ihre Transaktion bestätigen, um die Übertragung des Guthabens auf Arbitrum abzuschließen. Dieser Schritt muss innerhalb von 7 Tagen abgeschlossen werden, da sonst das Guthaben verloren gehen kann. In den meisten Fällen wird dieser Schritt automatisch ausgeführt, aber eine manuelle Bestätigung kann erforderlich sein, wenn es auf Arbitrum eine Gaspreisspitze gibt. Sollte es während dieses Prozesses zu Problemen kommen, gibt es Ressourcen, die helfen können: Kontaktieren Sie den Support unter support@thegraph.com oder auf [Discord](https://discord.gg/graphprotocol).
-### My vesting contract shows 0 GRT so I cannot transfer it, why is this and how do I fix it?
+### Mein Vesting-Vertrag zeigt 0 GRT an, so dass ich ihn nicht übertragen kann. Warum ist das so und wie kann ich das ändern?
-To initialize your L2 vesting contract, you need to transfer a nonzero amount of GRT to L2. This is required by the Arbitrum GRT bridge that is used by the L2 Transfer Tools. The GRT must come from the vesting contract's balance, so it does not include staked or delegated GRT.
+Um Ihren L2-Vesting-Vertrag zu initialisieren, müssen Sie einen GRT-Betrag größer als Null auf L2 übertragen. Dies wird von der Arbitrum-GRT-Brücke verlangt, die von den L2-Transfer-Tools verwendet wird. Die GRT müssen aus dem Guthaben des Vesting-Vertrags stammen; eingesetzte oder delegierte GRT zählen also nicht dazu.
-If you've staked or delegated all your GRT from the vesting contract, you can manually send a small amount like 1 GRT to the vesting contract address from anywhere else (e.g. from another wallet, or an exchange).
+Wenn Sie alle Ihre GRT aus dem Vesting-Vertrag eingesetzt oder delegiert haben, können Sie manuell einen kleinen Betrag wie 1 GRT an die Adresse des Vesting-Vertrags von einem anderen Ort aus senden (z. B. von einer anderen Wallet oder einer Börse).
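Wer den beschriebenen kleinen Transfer vorbereitet, muss den GRT-Betrag in die kleinste Token-Einheit (18 Dezimalstellen) umrechnen. Eine hypothetische Skizze; die Adresse ist ein reiner Platzhalter und der Funktionsname eine Annahme:

```python
# GRT hat, wie die meisten ERC-20-Token, 18 Dezimalstellen
GRT_DECIMALS = 18

def grt_to_base_units(amount_grt: int) -> int:
    """Rechnet ganze GRT in die kleinste Token-Einheit um."""
    return amount_grt * 10**GRT_DECIMALS

# Hypothetisches Beispiel: 1 GRT an die Adresse des Vesting-Vertrags senden
transfer = {
    "to": "0x0000000000000000000000000000000000000000",  # Platzhalter-Adresse
    "value": grt_to_base_units(1),
}
print(transfer["value"])  # 1000000000000000000
```

Der so berechnete Wert ist der Betrag, den Sie in der `transfer`-Funktion des GRT-Token-Vertrags (bzw. in Ihrer Wallet als „1 GRT“) angeben würden.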
-### I am using a vesting contract to stake or delegate, but I don't see a button to transfer my stake or delegation to L2, what do I do?
+### Ich verwende einen Vesting-Vertrag zum Staken oder Delegieren, aber ich sehe keine Schaltfläche, um meinen Anteil oder meine Delegation auf L2 zu übertragen. Was kann ich tun?
-If your vesting contract hasn't finished vesting, you need to first create an L2 vesting contract that will receive your stake or delegation on L2. This vesting contract will not allow releasing tokens in L2 until the end of the vesting timeline, but will allow you to transfer GRT back to the L1 vesting contract to be released there.
+Wenn Ihr Vesting-Vertrag noch nicht abgeschlossen ist, müssen Sie zunächst einen L2-Vesting-Vertrag erstellen, der Ihren Anteil oder Ihre Delegation auf L2 erhält. Dieser Vesting-Vertrag erlaubt keine Freigabe von Token in L2 bis zum Ende des Vesting-Zeitraums, aber er erlaubt Ihnen, GRT zurück zum L1-Vesting-Vertrag zu übertragen, um dort freigegeben zu werden.
-When connected with the vesting contract on Explorer, you should see a button to initialize your L2 vesting contract. Follow that process first, and you will then see the buttons to transfer your stake or delegation in your profile.
+Wenn Sie mit dem Vesting-Vertrag im Explorer verbunden sind, sollten Sie eine Schaltfläche zur Initialisierung Ihres L2-Vesting-Vertrags sehen. Folgen Sie zunächst diesem Prozess; danach sehen Sie in Ihrem Profil die Schaltflächen zur Übertragung Ihres Anteils oder Ihrer Delegation.
-### If I initialize my L2 vesting contract, will this also transfer my delegation to L2 automatically?
+### Wenn ich meinen L2-Vesting-Vertrag initialisiere, wird dann auch meine Delegation automatisch auf L2 übertragen?
-No, initializing your L2 vesting contract is a prerequisite for transferring stake or delegation from the vesting contract, but you still need to transfer these separately.
+Nein, die Initialisierung Ihres L2 Vesting-Vertrags ist eine Voraussetzung für die Übertragung von Anteilen oder Delegationen aus dem Vesting-Vertrag, aber Sie müssen diese trotzdem separat übertragen.
-You will see a banner on your profile prompting you to transfer your stake or delegation after you have initialized your L2 vesting contract.
+Nachdem Sie Ihren L2 Vesting-Vertrag initialisiert haben, erscheint in Ihrem Profil ein Banner, das Sie auffordert, Ihren Anteil oder Ihre Delegation zu übertragen.
### Kann ich meinen Vertrag mit unverfallbarer Anwartschaft zurück nach L1 verschieben?
diff --git a/website/src/pages/de/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/de/archived/arbitrum/l2-transfer-tools-guide.mdx
index 6a5b13da53d7..1be2386aedba 100644
--- a/website/src/pages/de/archived/arbitrum/l2-transfer-tools-guide.mdx
+++ b/website/src/pages/de/archived/arbitrum/l2-transfer-tools-guide.mdx
@@ -1,60 +1,60 @@
---
-title: L2 Transfer Tools Guide
+title: L2-Transfer-Tools-Anleitung
---
The Graph hat den Wechsel zu L2 auf Arbitrum One leicht gemacht. Für jeden Protokollteilnehmer gibt es eine Reihe von L2-Transfer-Tools, um den Transfer zu L2 für alle Netzwerkteilnehmer nahtlos zu gestalten. Je nachdem, was Sie übertragen möchten, müssen Sie eine bestimmte Anzahl von Schritten befolgen.
Einige häufig gestellte Fragen zu diesen Tools werden in den [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/) beantwortet. Die FAQs enthalten ausführliche Erklärungen zur Verwendung der Tools, zu ihrer Funktionsweise und zu den Dingen, die bei ihrer Verwendung zu beachten sind.
-## So übertragen Sie Ihren Subgraphen auf Arbitrum (L2)
+## So übertragen Sie Ihren Subgraph auf Arbitrum (L2)
-## Vorteile der Übertragung Ihrer Untergraphen
+## Vorteile der Übertragung Ihrer Subgraphen
The Graph's Community und die Kernentwickler haben im letzten Jahr den Wechsel zu Arbitrum [vorbereitet](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305). Arbitrum, eine Layer-2- oder "L2"-Blockchain, erbt die Sicherheit von Ethereum, bietet aber drastisch niedrigere Gasgebühren.
-Wenn Sie Ihren Subgraphen auf The Graph Network veröffentlichen oder aktualisieren, interagieren Sie mit intelligenten Verträgen auf dem Protokoll, und dies erfordert die Bezahlung von Gas mit ETH. Indem Sie Ihre Subgraphen zu Arbitrum verschieben, werden alle zukünftigen Aktualisierungen Ihres Subgraphen viel niedrigere Gasgebühren erfordern. Die niedrigeren Gebühren und die Tatsache, dass die Kurationsbindungskurven auf L2 flach sind, machen es auch für andere Kuratoren einfacher, auf Ihrem Subgraphen zu kuratieren, was die Belohnungen für Indexer auf Ihrem Subgraphen erhöht. Diese kostengünstigere Umgebung macht es auch für Indexer preiswerter, Ihren Subgraphen zu indizieren und zu bedienen. Die Belohnungen für die Indexierung werden in den kommenden Monaten auf Arbitrum steigen und auf dem Ethereum-Mainnet sinken, so dass immer mehr Indexer ihren Einsatz transferieren und ihre Operationen auf L2 einrichten werden.
+When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2.
-## Verstehen, was mit dem Signal, Ihrem L1-Subgraphen und den Abfrage-URLs geschieht
+## Understanding what happens with signal, your L1 Subgraph and query URLs
-Die Übertragung eines Subgraphen nach Arbitrum verwendet die Arbitrum GRT-Brücke, die wiederum die native Arbitrum-Brücke verwendet, um den Subgraphen nach L2 zu senden. Der "Transfer" löscht den Subgraphen im Mainnet und sendet die Informationen, um den Subgraphen auf L2 mit Hilfe der Brücke neu zu erstellen. Sie enthält auch die vom Eigentümer des Subgraphen signalisierte GRT, die größer als Null sein muss, damit die Brücke die Übertragung akzeptiert.
+Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer.
-Wenn Sie sich für die Übertragung des Untergraphen entscheiden, wird das gesamte Kurationssignal des Untergraphen in GRT umgewandelt. Dies ist gleichbedeutend mit dem "Verwerfen" des Subgraphen im Mainnet. Die GRT, die Ihrer Kuration entsprechen, werden zusammen mit dem Subgraphen an L2 gesendet, wo sie für die Prägung von Signalen in Ihrem Namen verwendet werden.
+When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf.
-Andere Kuratoren können wählen, ob sie ihren Anteil an GRT zurückziehen oder ihn ebenfalls an L2 übertragen, um das Signal auf demselben Untergraphen zu prägen. Wenn ein Subgraph-Eigentümer seinen Subgraph nicht an L2 überträgt und ihn manuell über einen Vertragsaufruf abmeldet, werden die Kuratoren benachrichtigt und können ihre Kuration zurückziehen.
+Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation.
-Sobald der Subgraph übertragen wurde, erhalten die Indexer keine Belohnungen mehr für die Indizierung des Subgraphen, da die gesamte Kuration in GRT umgewandelt wird. Es wird jedoch Indexer geben, die 1) übertragene Untergraphen für 24 Stunden weiter bedienen und 2) sofort mit der Indizierung des Untergraphen auf L2 beginnen. Da diese Indexer den Untergraphen bereits indiziert haben, sollte es nicht nötig sein, auf die Synchronisierung des Untergraphen zu warten, und es wird möglich sein, den L2-Untergraphen fast sofort abzufragen.
+As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately.
-Anfragen an den L2-Subgraphen müssen an eine andere URL gerichtet werden (an `arbitrum-gateway.thegraph.com`), aber die L1-URL wird noch mindestens 48 Stunden lang funktionieren. Danach wird das L1-Gateway (für eine gewisse Zeit) Anfragen an das L2-Gateway weiterleiten, was jedoch zu zusätzlichen Latenzzeiten führt. Es wird daher empfohlen, alle Anfragen so bald wie möglich auf die neue URL umzustellen.
+Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible.
## Auswahl Ihrer L2 Wallet
-Als Sie Ihren Subgraphen im Mainnet veröffentlicht haben, haben Sie eine angeschlossene Wallet benutzt, um den Subgraphen zu erstellen, und diese Wallet besitzt die NFT, die diesen Subgraphen repräsentiert und Ihnen erlaubt, Updates zu veröffentlichen.
+When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates.
-Wenn man den Subgraphen zu Arbitrum überträgt, kann man eine andere Wallet wählen, die diesen Subgraphen NFT auf L2 besitzen wird.
+When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2.
Wenn Sie eine "normale" Wallet wie MetaMask verwenden (ein Externally Owned Account oder EOA, d.h. eine Wallet, die kein Smart Contract ist), dann ist dies optional und es wird empfohlen, die gleiche Eigentümeradresse wie in L1 beizubehalten.
-Wenn Sie eine Smart-Contract-Wallet, wie z.B. eine Multisig (z.B. Safe), verwenden, dann ist die Wahl einer anderen L2-Wallet-Adresse zwingend erforderlich, da es sehr wahrscheinlich ist, dass dieses Konto nur im Mainnet existiert und Sie mit dieser Wallet keine Transaktionen auf Arbitrum durchführen können. Wenn Sie weiterhin eine Smart Contract Wallet oder Multisig verwenden möchten, erstellen Sie eine neue Wallet auf Arbitrum und verwenden Sie deren Adresse als L2-Besitzer Ihres Subgraphen.
+If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph.
-**Es ist sehr wichtig, eine Wallet-Adresse zu verwenden, die Sie kontrollieren und die Transaktionen auf Arbitrum durchführen kann. Andernfalls geht der Subgraph verloren und kann nicht wiederhergestellt werden.**
+**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.**
## Vorbereitung der Übertragung: Überbrückung einiger ETH
-Die Übertragung des Subgraphen beinhaltet das Senden einer Transaktion über die Brücke und das Ausführen einer weiteren Transaktion auf Arbitrum. Die erste Transaktion verwendet ETH im Mainnet und enthält einige ETH, um das Gas zu bezahlen, wenn die Nachricht auf L2 empfangen wird. Wenn dieses Gas jedoch nicht ausreicht, müssen Sie die Transaktion wiederholen und das Gas direkt auf L2 bezahlen (dies ist "Schritt 3: Bestätigen des Transfers" unten). Dieser Schritt **muss innerhalb von 7 Tagen nach Beginn der Überweisung** ausgeführt werden. Außerdem wird die zweite Transaktion ("Schritt 4: Beenden der Übertragung auf L2") direkt auf Arbitrum durchgeführt. Aus diesen Gründen benötigen Sie etwas ETH auf einer Arbitrum-Wallet. Wenn Sie ein Multisig- oder Smart-Contract-Konto verwenden, muss sich die ETH in der regulären (EOA-) Wallet befinden, die Sie zum Ausführen der Transaktionen verwenden, nicht in der Multisig-Wallet selbst.
+Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself.
Sie können ETH auf einigen Börsen kaufen und direkt auf Arbitrum abheben, oder Sie können die Arbitrum-Brücke verwenden, um ETH von einer Mainnet-Wallet zu L2 zu senden: [bridge.arbitrum.io](http://bridge.arbitrum.io). Da die Gasgebühren auf Arbitrum niedriger sind, sollten Sie nur eine kleine Menge benötigen. Es wird empfohlen, mit einem niedrigen Schwellenwert (z.B. 0,01 ETH) zu beginnen, damit Ihre Transaktion genehmigt wird.
-## Suche nach dem Untergraphen Transfer Tool
+## Finding the Subgraph Transfer Tool
-Sie finden das L2 Transfer Tool, wenn Sie die Seite Ihres Subgraphen in Subgraph Studio ansehen:
+You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio:

-Sie ist auch im Explorer verfügbar, wenn Sie mit der Wallet verbunden sind, die einen Untergraphen besitzt, und auf der Seite dieses Untergraphen im Explorer:
+It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer:
-
+
Wenn Sie auf die Schaltfläche auf L2 übertragen klicken, wird das Übertragungstool geöffnet, mit dem Sie den Übertragungsvorgang starten können.
@@ -64,15 +64,15 @@ Bevor Sie mit dem Transfer beginnen, müssen Sie entscheiden, welche Adresse den
Bitte beachten Sie auch, dass die Übertragung des Untergraphen ein Signal ungleich Null auf dem Untergraphen mit demselben Konto erfordert, das den Untergraphen besitzt; wenn Sie kein Signal auf dem Untergraphen haben, müssen Sie ein wenig Kuration hinzufügen (das Hinzufügen eines kleinen Betrags wie 1 GRT würde ausreichen).
-Nachdem Sie das Transfer-Tool geöffnet haben, können Sie die L2-Wallet-Adresse in das Feld "Empfänger-Wallet-Adresse" eingeben - **vergewissern Sie sich, dass Sie hier die richtige Adresse eingegeben haben**. Wenn Sie auf "Transfer Subgraph" klicken, werden Sie aufgefordert, die Transaktion auf Ihrer Wallet auszuführen (beachten Sie, dass ein gewisser ETH-Wert enthalten ist, um das L2-Gas zu bezahlen); dadurch wird der Transfer eingeleitet und Ihr L1-Subgraph außer Kraft gesetzt (siehe "Verstehen, was mit Signal, Ihrem L1-Subgraph und Abfrage-URLs passiert" weiter oben für weitere Details darüber, was hinter den Kulissen passiert).
+After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes).
Wenn Sie diesen Schritt ausführen, **vergewissern Sie sich, dass Sie bis zum Abschluss von Schritt 3 in weniger als 7 Tagen fortfahren, sonst gehen der Subgraph und Ihr Signal GRT verloren.** Dies liegt daran, wie L1-L2-Nachrichten auf Arbitrum funktionieren: Nachrichten, die über die Brücke gesendet werden, sind "wiederholbare Tickets", die innerhalb von 7 Tagen ausgeführt werden müssen, und die erste Ausführung muss möglicherweise wiederholt werden, wenn es Spitzen im Gaspreis auf Arbitrum gibt.
-
+
-## Schritt 2: Warten, bis der Untergraph L2 erreicht hat
+## Step 2: Waiting for the Subgraph to get to L2
-Nachdem Sie die Übertragung gestartet haben, muss die Nachricht, die Ihren L1-Subgraphen an L2 sendet, die Arbitrum-Brücke durchlaufen. Dies dauert etwa 20 Minuten (die Brücke wartet darauf, dass der Mainnet-Block, der die Transaktion enthält, vor potenziellen Reorgs der Kette "sicher" ist).
+After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs).
Sobald diese Wartezeit abgelaufen ist, versucht Arbitrum, die Übertragung auf den L2-Verträgen automatisch auszuführen.
@@ -92,74 +92,74 @@ Zu diesem Zeitpunkt wurden Ihr Subgraph und GRT auf Arbitrum empfangen, aber der

-
+
-Dadurch wird der Untergraph veröffentlicht, so dass Indexer, die auf Arbitrum arbeiten, damit beginnen können, ihn zu bedienen. Es wird auch ein Kurationssignal unter Verwendung der GRT, die von L1 übertragen wurden, eingeleitet.
+This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that were transferred from L1.
## Schritt 5: Aktualisierung der Abfrage-URL
-Ihr Subgraph wurde erfolgreich zu Arbitrum übertragen! Um den Subgraphen abzufragen, wird die neue URL lauten:
+Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be:
`https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]`
-Beachten Sie, dass die ID des Subgraphen auf Arbitrum eine andere sein wird als die, die Sie im Mainnet hatten, aber Sie können sie immer im Explorer oder Studio finden. Wie oben erwähnt (siehe "Verstehen, was mit Signal, Ihrem L1-Subgraphen und Abfrage-URLs passiert"), wird die alte L1-URL noch eine kurze Zeit lang unterstützt, aber Sie sollten Ihre Abfragen auf die neue Adresse umstellen, sobald der Subgraph auf L2 synchronisiert worden ist.
+Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs"), the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2.
## Wie Sie Ihre Kuration auf Arbitrum übertragen (L2)
-## Verstehen, was mit der Kuration bei der Übertragung von Untergraphen auf L2 geschieht
+## Understanding what happens to curation on Subgraph transfers to L2
-Wenn der Eigentümer eines Untergraphen einen Untergraphen an Arbitrum überträgt, werden alle Signale des Untergraphen gleichzeitig in GRT konvertiert. Dies gilt für "automatisch migrierte" Signale, d.h. Signale, die nicht spezifisch für eine Subgraphenversion oder einen Einsatz sind, sondern der neuesten Version eines Subgraphen folgen.
+When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph.
-Diese Umwandlung von Signal in GRT entspricht dem, was passieren würde, wenn der Eigentümer des Subgraphen den Subgraphen in L1 verwerfen würde. Wenn der Subgraph veraltet oder übertragen wird, werden alle Kurationssignale gleichzeitig "verbrannt" (unter Verwendung der Kurationsbindungskurve) und das resultierende GRT wird vom GNS-Smart-Contract gehalten (das ist der Vertrag, der Subgraph-Upgrades und automatisch migrierte Signale handhabt). Jeder Kurator auf diesem Subgraphen hat daher einen Anspruch auf dieses GRT proportional zu der Menge an Anteilen, die er für den Subgraphen hatte.
+This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph.
-Ein Teil dieser GRT, der dem Eigentümer des Untergraphen entspricht, wird zusammen mit dem Untergraphen an L2 gesendet.
+A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph.
-Ein Teil dieser GRT, der dem Eigentümer des Untergraphen entspricht, wird zusammen mit dem Untergraphen an L2 gesendet.
+At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it.
## Auswahl Ihrer L2 Wallet
Wenn Sie Ihre Kuration auf L2 übertragen, können Sie eine andere Wallet wählen, die das Kurationssignal auf L2 besitzen wird.
-If you're using a "regular" wallet like Metamask (an Externally Owned Account or EOA, i.e. a wallet that is not a smart contract), then this is optional and it is recommended to keep the same Curator address as in L1.
+Wenn Sie eine "normale" Wallet wie MetaMask verwenden (ein Externally Owned Account oder EOA, d.h. eine Wallet, die kein Smart Contract ist), dann ist dies optional und es wird empfohlen, die gleiche Kurator-Adresse wie in L1 beizubehalten.
-If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 receiving wallet address.
+Wenn Sie eine Smart-Contract-Wallet wie eine Multisig (z.B. einen Safe) verwenden, dann ist die Wahl einer anderen L2-Wallet-Adresse zwingend erforderlich, da es sehr wahrscheinlich ist, dass dieses Konto nur im Mainnet existiert und Sie mit dieser Wallet keine Transaktionen auf Arbitrum durchführen können. Wenn Sie weiterhin eine Smart Contract Wallet oder Multisig verwenden möchten, erstellen Sie eine neue Wallet auf Arbitrum und verwenden Sie deren Adresse als L2-Empfangs-Wallet-Adresse.
-**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum, as otherwise the curation will be lost and cannot be recovered.**
+**Es ist äußerst wichtig, eine Wallet-Adresse zu verwenden, die Sie kontrollieren und mit der Sie Transaktionen auf Arbitrum durchführen können, da sonst die Kuration verloren geht und nicht wiederhergestellt werden kann.**
-## Sending curation to L2: Step 1
+## Senden der Kuration an L2: Schritt 1
-Before starting the transfer, you must decide which address will own the curation on L2 (see "Choosing your L2 wallet" above), and it is recommended having some ETH for gas already bridged on Arbitrum in case you need to retry the execution of the message on L2. You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - since gas fees on Arbitrum are so low, you should only need a small amount, e.g. 0.01 ETH will probably be more than enough.
+Bevor Sie mit dem Transfer beginnen, müssen Sie entscheiden, welche Adresse die Kuration auf L2 besitzen wird (siehe „Auswahl Ihrer L2 Wallet“ oben), und es wird empfohlen, einige ETH für Gas bereits auf Arbitrum überbrückt zu haben, falls Sie die Ausführung der Nachricht auf L2 wiederholen müssen. Sie können ETH auf einigen Börsen kaufen und sie direkt auf Arbitrum abheben, oder Sie können die Arbitrum-Bridge benutzen, um ETH von einer Mainnet-Wallet zu L2 zu senden: [bridge.arbitrum.io](http://bridge.arbitrum.io) - da die Gasgebühren auf Arbitrum so niedrig sind, sollten Sie nur eine kleine Menge benötigen, z.B. 0,01 ETH werden wahrscheinlich mehr als genug sein.
-If a subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred subgraph.
+Wenn ein Subgraph, den Sie kuratieren, auf L2 übertragen wurde, wird im Explorer eine Meldung angezeigt, dass Sie einen übertragenen Subgraph kuratieren.
-When looking at the subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool.
+Auf der Subgraph-Seite können Sie wählen, ob Sie die Kuration zurückziehen oder übertragen wollen. Ein Klick auf „Signal nach Arbitrum übertragen“ öffnet das Übertragungstool.

-After opening the Transfer Tool, you may be prompted to add some ETH to your wallet if you don't have any. Then you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Signal will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer.
+Nachdem Sie das Transfer-Tool geöffnet haben, werden Sie möglicherweise aufgefordert, Ihrer Wallet ETH hinzuzufügen, falls Sie keine haben. Dann können Sie die Adresse der L2-Wallet in das Feld „Receiving wallet address“ (Adresse der empfangenden Wallet) eingeben - **vergewissern Sie sich, dass Sie hier die richtige Adresse eingegeben haben**. Wenn Sie auf „Transfer Signal“ klicken, werden Sie aufgefordert, die Transaktion auf Ihrer Wallet auszuführen (beachten Sie, dass ein gewisser ETH-Wert enthalten ist, um das L2-Gas zu bezahlen); dadurch wird der Transfer eingeleitet.
-If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retryable tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum.
+Wenn Sie diesen Schritt ausführen, **vergewissern Sie sich, dass Sie bis zum Abschluss von Schritt 3 in weniger als 7 Tagen fortfahren, sonst geht Ihr Signal GRT verloren.** Das liegt daran, wie der L1-L2-Nachrichtenaustausch auf Arbitrum funktioniert: Nachrichten, die über die Bridge gesendet werden, sind „wiederholbare Tickets“, die innerhalb von 7 Tagen ausgeführt werden müssen, und die anfängliche Ausführung muss möglicherweise wiederholt werden, wenn es Spitzen im Gaspreis auf Arbitrum gibt.
-## Sending curation to L2: step 2
+## Senden der Kuration an L2: Schritt 2
-Starting the transfer:
+Starten Sie den Transfer:

-After you start the transfer, the message that sends your L1 curation to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs).
+Nachdem Sie die Übertragung gestartet haben, muss die Nachricht, die Ihre L1-Kuration an L2 sendet, die Arbitrum-Bridge durchlaufen. Dies dauert etwa 20 Minuten (die Bridge wartet darauf, dass der Mainnet-Block, der die Transaktion enthält, vor potenziellen Chain Reorgs „sicher“ ist).
Sobald diese Wartezeit abgelaufen ist, versucht Arbitrum, die Übertragung auf den L2-Verträgen automatisch auszuführen.

-## Sending curation to L2: step 3
+## Senden der Kuration an L2: Schritt 3
-In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the curation on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your curation to L2 will be pending and require a retry within 7 days.
+In den meisten Fällen wird dieser Schritt automatisch ausgeführt, da das in Schritt 1 enthaltene L2-Gas ausreichen sollte, um die Transaktion auszuführen, die die Kuration auf den Arbitrum-Verträgen erhält. In einigen Fällen ist es jedoch möglich, dass ein Anstieg der Gaspreise auf Arbitrum dazu führt, dass diese automatische Ausführung fehlschlägt. In diesem Fall wird das „Ticket“, das Ihre Kuration an L2 sendet, ausstehend sein und einen erneuten Versuch innerhalb von 7 Tagen erfordern.
Wenn dies der Fall ist, müssen Sie sich mit einer L2-Wallet verbinden, die etwas ETH auf Arbitrum hat, Ihr Wallet-Netzwerk auf Arbitrum umstellen und auf "Confirm Transfer" klicken, um die Transaktion zu wiederholen.

-## Withdrawing your curation on L1
+## Zurückziehen Ihrer Kuration auf L1
-If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address.
+Wenn Sie es vorziehen, Ihre GRT nicht an L2 zu senden, oder wenn Sie die GRT lieber manuell überbrücken möchten, können Sie Ihre kuratierten GRT auf L1 abheben. Wählen Sie auf dem Banner auf der Subgraph-Seite „Signal zurückziehen“ und bestätigen Sie die Transaktion; die GRT werden an Ihre Kurator-Adresse gesendet.
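The guide's final step repoints consumers at the new gateway URL (`https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]`). A minimal sketch of that URL switch, assuming a standard GraphQL-over-HTTP setup — `demo-key`, `demo-subgraph-id`, and the `_meta` query body are illustrative placeholders, not real values:

```python
import json

# Template taken from the guide; [api-key] and [l2-subgraph-id] are
# placeholders you must replace with your own API key and the new
# L2 Subgraph ID shown in Explorer or Studio after the transfer.
L2_GATEWAY_TEMPLATE = (
    "https://arbitrum-gateway.thegraph.com/api/{api_key}/subgraphs/id/{subgraph_id}"
)

def build_l2_query_url(api_key: str, subgraph_id: str) -> str:
    """Return the post-transfer Arbitrum gateway URL for a Subgraph."""
    return L2_GATEWAY_TEMPLATE.format(api_key=api_key, subgraph_id=subgraph_id)

# GraphQL queries are sent as a JSON body via HTTP POST to that URL;
# the `_meta` query here is only an illustrative example.
payload = json.dumps({"query": "{ _meta { block { number } } }"})

if __name__ == "__main__":
    print(build_l2_query_url("demo-key", "demo-subgraph-id"))
```

Pointing existing clients at the L2 gateway is just this URL change; the query body and API key stay the same as on L1.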
diff --git a/website/src/pages/de/archived/sunrise.mdx b/website/src/pages/de/archived/sunrise.mdx
index 398fe1ca72f7..5b521b176ffc 100644
--- a/website/src/pages/de/archived/sunrise.mdx
+++ b/website/src/pages/de/archived/sunrise.mdx
@@ -1,13 +1,13 @@
---
title: Post-Sunrise + Upgrade auf The Graph Network FAQ
-sidebarTitle: Post-Sunrise Upgrade FAQ
+sidebarTitle: FAQ zum Post-Sunrise-Upgrade
---
> Hinweis: Die Sunrise der dezentralisierten Daten endete am 12. Juni 2024.
## Was war die Sunrise der dezentralisierten Daten?
-Die Sunrise of Decentralized Data war eine Initiative, die von Edge & Node angeführt wurde. Diese Initiative ermöglichte es Subgraph-Entwicklern, nahtlos auf das dezentrale Netzwerk von The Graph zu wechseln.
+Die Sunrise der dezentralisierten Daten war eine Initiative, die von Edge & Node angeführt wurde. Diese Initiative ermöglichte es Subgraph-Entwicklern, nahtlos auf das dezentrale Netzwerk von The Graph zu wechseln.
Dieser Plan stützt sich auf frühere Entwicklungen des Graph-Ökosystems, einschließlich eines aktualisierten Indexers, der Abfragen auf neu veröffentlichte Subgraphen ermöglicht.
diff --git a/website/src/pages/de/contracts.json b/website/src/pages/de/contracts.json
index b33760446ae8..6b94c57a82a5 100644
--- a/website/src/pages/de/contracts.json
+++ b/website/src/pages/de/contracts.json
@@ -1,4 +1,4 @@
{
- "contract": "Contract",
+ "contract": "Vertrag",
"address": "Adresse"
}
diff --git a/website/src/pages/de/global.json b/website/src/pages/de/global.json
index 424bff2965bc..99f5545ec43c 100644
--- a/website/src/pages/de/global.json
+++ b/website/src/pages/de/global.json
@@ -1,35 +1,78 @@
{
"navigation": {
"title": "Hauptmenü",
- "show": "Show navigation",
- "hide": "Hide navigation",
- "subgraphs": "Subgraphs",
+ "show": "Navigation anzeigen",
+ "hide": "Navigation ausblenden",
+ "subgraphs": "Subgraphen",
"substreams": "Substreams",
- "sps": "Substreams-Powered Subgraphs",
- "indexing": "Indexing",
+ "sps": "Substreams-getriebene Subgraphen",
+ "tokenApi": "Token API",
+ "indexing": "Indizierung",
"resources": "Ressourcen",
- "archived": "Archived"
+ "archived": "Archiviert"
},
"page": {
- "lastUpdated": "Last updated",
+ "lastUpdated": "Zuletzt aktualisiert",
"readingTime": {
- "title": "Reading time",
- "minutes": "minutes"
+ "title": "Lesedauer",
+ "minutes": "Minuten"
},
- "previous": "Previous page",
- "next": "Next page",
- "edit": "Edit on GitHub",
- "onThisPage": "On this page",
- "tableOfContents": "Table of contents",
- "linkToThisSection": "Link to this section"
+ "previous": "Vorherige Seite",
+ "next": "Nächste Seite",
+ "edit": "Auf GitHub bearbeiten",
+ "onThisPage": "Auf dieser Seite",
+ "tableOfContents": "Inhaltsübersicht",
+ "linkToThisSection": "Link zu diesem Abschnitt"
},
"content": {
- "note": "Note",
+ "callout": {
+ "note": "Note",
+ "tip": "Tip",
+ "important": "Important",
+ "warning": "Warning",
+ "caution": "Caution"
+ },
"video": "Video"
},
+ "openApi": {
+ "parameters": {
+ "pathParameters": "Path Parameters",
+ "queryParameters": "Abfrage-Parameter",
+ "headerParameters": "Header Parameters",
+ "cookieParameters": "Cookie Parameters",
+ "parameter": "Parameter",
+ "description": "Beschreibung",
+ "value": "Value",
+ "required": "Required",
+ "deprecated": "Deprecated",
+ "defaultValue": "Default value",
+ "minimumValue": "Minimum value",
+ "maximumValue": "Maximum value",
+ "acceptedValues": "Accepted values",
+ "acceptedPattern": "Accepted pattern",
+ "format": "Format",
+ "serializationFormat": "Serialization format"
+ },
+ "request": {
+ "label": "Test this endpoint",
+ "noCredentialsRequired": "No credentials required",
+ "send": "Send Request"
+ },
+ "responses": {
+ "potentialResponses": "Potential Responses",
+ "status": "Status",
+ "description": "Beschreibung",
+ "liveResponse": "Live Response",
+ "example": "Beispiel"
+ },
+ "errors": {
+ "invalidApi": "Could not retrieve API {0}.",
+ "invalidOperation": "Could not retrieve operation {0} in API {1}."
+ }
+ },
"notFound": {
- "title": "Oops! This page was lost in space...",
- "subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.",
- "back": "Go Home"
+ "title": "Ups! Diese Seite ist im Space verloren gegangen...",
+ "subtitle": "Überprüfen Sie, ob Sie die richtige Adresse verwenden, oder besuchen Sie unsere Website, indem Sie auf den unten stehenden Link klicken.",
+ "back": "Zurück zur Startseite"
}
}
diff --git a/website/src/pages/de/index.json b/website/src/pages/de/index.json
index fccfa5cf2a6c..b56ea56c5897 100644
--- a/website/src/pages/de/index.json
+++ b/website/src/pages/de/index.json
@@ -2,41 +2,41 @@
"title": "Home",
"hero": {
"title": "The Graph Docs",
- "description": "Kick-start your web3 project with the tools to extract, transform, and load blockchain data.",
- "cta1": "How The Graph works",
+ "description": "Starten Sie Ihr Web3-Projekt mit den Tools zum Extrahieren, Transformieren und Laden von Blockchain-Daten.",
+ "cta1": "Funktionsweise von The Graph",
"cta2": "Erstellen Sie Ihren ersten Subgraphen"
},
"products": {
- "title": "The Graph’s Products",
- "description": "Choose a solution that fits your needs—interact with blockchain data your way.",
+ "title": "The Graph's Products",
+ "description": "Wählen Sie eine Lösung, die Ihren Anforderungen entspricht, und interagieren Sie auf Ihre Weise mit Blockchain-Daten.",
"subgraphs": {
"title": "Subgraphs",
- "description": "Extract, process, and query blockchain data with open APIs.",
- "cta": "Develop a subgraph"
+ "description": "Extrahieren, Verarbeiten und Abfragen von Blockchain-Daten mit offenen APIs.",
+ "cta": "Entwickeln Sie einen Subgraphen"
},
"substreams": {
"title": "Substreams",
- "description": "Fetch and consume blockchain data with parallel execution.",
- "cta": "Develop with Substreams"
+ "description": "Abrufen und Konsumieren von Blockchain-Daten mit paralleler Ausführung.",
+ "cta": "Entwickeln mit Substreams"
},
"sps": {
- "title": "Substreams-Powered Subgraphs",
- "description": "Boost your subgraph’s efficiency and scalability by using Substreams.",
- "cta": "Set up a Substreams-powered subgraph"
+ "title": "Substreams-getriebene Subgraphen",
+ "description": "Boost your subgraph's efficiency and scalability by using Substreams.",
+ "cta": "Einrichten eines Substreams-powered Subgraphen"
},
"graphNode": {
- "title": "Graph Node",
- "description": "Index blockchain data and serve it via GraphQL queries.",
- "cta": "Set up a local Graph Node"
+ "title": "Graph-Knoten",
+ "description": "Indexieren Sie Blockchain-Daten und stellen Sie sie über GraphQL-Abfragen bereit.",
+ "cta": "Lokalen Graph-Knoten einrichten"
},
"firehose": {
"title": "Firehose",
- "description": "Extract blockchain data into flat files to enhance sync times and streaming capabilities.",
- "cta": "Get started with Firehose"
+ "description": "Extrahieren Sie Blockchain-Daten in flache Dateien, um die Synchronisierungszeiten und Streaming-Funktionen zu verbessern.",
+ "cta": "Erste Schritte mit Firehose"
}
},
"supportedNetworks": {
- "title": "Supported Networks",
+ "title": "Unterstützte Netzwerke",
"details": "Network Details",
"services": "Services",
"type": "Type",
@@ -44,7 +44,7 @@
"identifier": "Identifier",
"chainId": "Chain ID",
"nativeCurrency": "Native Currency",
- "docs": "Docs",
+ "docs": "Dokumente",
"shortName": "Short Name",
"guides": "Guides",
"search": "Search networks",
@@ -54,9 +54,9 @@
"infoText": "Boost your developer experience by enabling The Graph's indexing network.",
"infoLink": "Integrate new network",
"description": {
- "base": "The Graph supports {0}. To add a new network, {1}",
- "networks": "networks",
- "completeThisForm": "complete this form"
+ "base": "The Graph unterstützt {0}. Um ein neues Netzwerk hinzuzufügen, {1}",
+ "networks": "Netzwerke",
+ "completeThisForm": "füllen Sie dieses Formular aus"
},
"emptySearch": {
"title": "No networks found",
@@ -92,7 +92,7 @@
"description": "Leverage features like custom data sources, event handlers, and topic filters."
},
"billing": {
- "title": "Billing",
+ "title": "Abrechnung",
"description": "Optimize costs and manage billing efficiently."
}
},
@@ -123,53 +123,53 @@
"title": "Guides",
"description": "",
"explorer": {
- "title": "Find Data in Graph Explorer",
- "description": "Leverage hundreds of public subgraphs for existing blockchain data."
+ "title": "find Data in Graph Explorer",
+ "description": "Nutzen Sie Hunderte von öffentlichen Subgraphen für bestehende Blockchain-Daten."
},
"publishASubgraph": {
- "title": "Publish a Subgraph",
- "description": "Add your subgraph to the decentralized network."
+ "title": "Veröffentlichen eines Subgraphen",
+ "description": "Fügen Sie Ihren Subgraphen dem dezentralen Netzwerk hinzu."
},
"publishSubstreams": {
- "title": "Publish Substreams",
- "description": "Launch your Substreams package to the Substreams Registry."
+ "title": "Substreams veröffentlichen",
+ "description": "Starten Sie Ihr Substrats-Paket in der Substrats-Registrierung."
},
"queryingBestPractices": {
- "title": "Querying Best Practices",
- "description": "Optimize your subgraph queries for faster, better results."
+ "title": "Best Practices für Abfragen",
+ "description": "Optimieren Sie Ihre Subgraphenabfragen für schnellere und bessere Ergebnisse."
},
"timeseries": {
- "title": "Optimized Timeseries & Aggregations",
- "description": "Streamline your subgraph for efficiency."
+ "title": "Optimierte Zeitreihen & Aggregationen",
+ "description": "Optimieren Sie Ihren Subgraphen für mehr Effizienz."
},
"apiKeyManagement": {
- "title": "API Key Management",
- "description": "Easily create, manage, and secure API keys for your subgraphs."
+ "title": "API-Schlüssel-Management",
+ "description": "Einfaches Erstellen, Verwalten und Sichern von API-Schlüsseln für Ihre Subgraphen."
},
"transferToTheGraph": {
- "title": "Transfer to The Graph",
- "description": "Seamlessly upgrade your subgraph from any platform."
+ "title": "Übertragung auf The Graph",
+ "description": "Aktualisieren Sie Ihren Subgraph nahtlos von jeder Plattform aus."
}
},
"videos": {
"title": "Video Tutorials",
- "watchOnYouTube": "Watch on YouTube",
+ "watchOnYouTube": "Auf YouTube ansehen",
"theGraphExplained": {
"title": "The Graph Explained In 1 Minute",
- "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video."
+ "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video."
},
"whatIsDelegating": {
"title": "Was ist Delegieren?",
- "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating."
+ "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph."
},
"howToIndexSolana": {
- "title": "How to Index Solana with a Substreams-powered Subgraph",
- "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph."
+ "title": "Indizierung von Solana mit einem Substreams-powered Subgraph",
+ "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases."
}
},
"time": {
- "reading": "Reading time",
- "duration": "Duration",
+ "reading": "Lesedauer",
+ "duration": "Laufzeit",
"minutes": "min"
}
}
diff --git a/website/src/pages/de/indexing/_meta-titles.json b/website/src/pages/de/indexing/_meta-titles.json
index 42f4de188fd4..ccfae2db5e84 100644
--- a/website/src/pages/de/indexing/_meta-titles.json
+++ b/website/src/pages/de/indexing/_meta-titles.json
@@ -1,3 +1,3 @@
{
- "tooling": "Indexer Tooling"
+ "tooling": "Indexierer- Tools"
}
diff --git a/website/src/pages/de/indexing/new-chain-integration.mdx b/website/src/pages/de/indexing/new-chain-integration.mdx
index 54d9b95d5a24..eed49796a99f 100644
--- a/website/src/pages/de/indexing/new-chain-integration.mdx
+++ b/website/src/pages/de/indexing/new-chain-integration.mdx
@@ -2,7 +2,7 @@
title: Integration neuer Ketten
---
-Ketten können die Unterstützung von Subgraphen in ihr Ökosystem einbringen, indem sie eine neue `graph-node` Integration starten. Subgraphen sind ein leistungsfähiges Indizierungswerkzeug, das Entwicklern eine Welt voller Möglichkeiten eröffnet. Graph Node indiziert bereits Daten von den hier aufgeführten Ketten. Wenn Sie an einer neuen Integration interessiert sind, gibt es 2 Integrationsstrategien:
+Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies:
1. **EVM JSON-RPC**
2. **Firehose**: Alle Firehose-Integrationslösungen umfassen Substreams, eine groß angelegte Streaming-Engine auf der Grundlage von Firehose mit nativer `graph-node`-Unterstützung, die parallelisierte Transformationen ermöglicht.
@@ -51,7 +51,7 @@ Während JSON-RPC und Firehose beide für Subgraphen geeignet sind, ist für Ent
- All diese `getLogs`-Aufrufe und Roundtrips werden durch einen einzigen Stream ersetzt, der im Herzen von `graph-node` ankommt; ein einziges Blockmodell für alle Subgraphen, die es verarbeitet.
-> HINWEIS: Bei einer Firehose-basierten Integration für EVM-Ketten müssen Indexer weiterhin den Archiv-RPC-Knoten der Kette ausführen, um Subgraphen ordnungsgemäß zu indizieren. Dies liegt daran, dass der Firehose nicht in der Lage ist, den Smart-Contract-Status bereitzustellen, der normalerweise über die RPC-Methode „eth_call“ zugänglich ist. (Es ist erwähnenswert, dass `eth_calls` keine gute Praxis für Entwickler sind)
+> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers)
## Graph-Node Konfiguration
diff --git a/website/src/pages/de/indexing/overview.mdx b/website/src/pages/de/indexing/overview.mdx
index 05530cbff93a..4635fbb7f2b9 100644
--- a/website/src/pages/de/indexing/overview.mdx
+++ b/website/src/pages/de/indexing/overview.mdx
@@ -5,43 +5,43 @@ sidebarTitle: Überblick
Indexer sind Knotenbetreiber im Graph Network, die Graph Tokens (GRT) einsetzen, um Indizierungs- und Abfrageverarbeitungsdienste anzubieten. Indexer verdienen Abfragegebühren und Indexing Rewards für ihre Dienste. Sie verdienen auch Abfragegebühren, die gemäß einer exponentiellen Rabattfunktion zurückerstattet werden.
-GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers also earn rewards for delegated stake from Delegators, to contribute to the network.
+Die im Protokoll eingesetzten GRT unterliegen einer Auftauphase (thawing period) und können gekürzt (slashed) werden, wenn Indexierer böswillig handeln und Anwendungen falsche Daten liefern oder wenn sie falsch indexieren. Indexierer erhalten außerdem Belohnungen für den von Delegatoren delegierten Einsatz, als Beitrag zum Netzwerk.
-Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing.
+Die Indexierer wählen die zu indexierenden Subgraphen auf der Grundlage des Kurationssignals des Subgraphen aus, wobei die Kuratoren GRT einsetzen, um anzugeben, welche Subgraphen von hoher Qualität sind und priorisiert werden sollten. Verbraucher (z. B. Anwendungen) können auch Parameter dafür festlegen, welche Indexierer Abfragen für ihre Subgraphen verarbeiten, und Präferenzen für die Preisgestaltung von Abfragen festlegen.
## FAQ
-### What is the minimum stake required to be an Indexer on the network?
+### Wie hoch ist der Mindesteinsatz, der erforderlich ist, um ein Indexierer im Netzwerk zu sein?
-The minimum stake for an Indexer is currently set to 100K GRT.
+Der Mindesteinsatz für einen Indexer ist derzeit auf 100.000 GRT festgelegt.
-### What are the revenue streams for an Indexer?
+### Welche Einnahmequellen gibt es für einen Indexierer?
-**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity.
+**Rückerstattungen von Abfragegebühren** - Zahlungen für die Bedienung von Abfragen im Netz. Diese Zahlungen werden über Statuskanäle zwischen einem Indexer und einem Gateway vermittelt. Jede Abfrageanfrage eines Gateways enthält eine Zahlung und die entsprechende Antwort einen Nachweis für die Gültigkeit des Abfrageergebnisses.
-**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network.
+**Indexierungsbelohnungen** - Die Indexierungsbelohnungen werden über eine jährliche protokollweite Inflation von 3 % an Indexer verteilt, die Subgraph-Deployments für das Netzwerk indexieren.
-### How are indexing rewards distributed?
+### Wie werden die Indexierungsprämien verteilt?
-Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.**
+Indexierungsbelohnungen stammen aus der Protokollinflation, die auf 3 % pro Jahr festgelegt ist. Sie werden auf der Grundlage des Anteils aller Kurationssignale auf jedem Subgraphen verteilt und dann anteilig an die Indexierer auf der Grundlage ihres zugewiesenen Anteils an diesem Subgraphen ausgeschüttet. **Eine Zuteilung muss mit einem gültigen Indizierungsnachweis (POI) abgeschlossen werden, der die in der Schlichtungscharta festgelegten Standards erfüllt, um für Belohnungen in Frage zu kommen.**
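Zur Veranschaulichung der oben beschriebenen zweistufigen Verteilung eine kleine Python-Skizze (Beispielzahlen und Funktionsname sind hypothetisch, kein Protokoll-Code):

```python
# Skizze: Indexierungsbelohnungen erst pro Subgraph nach Kurationssignal,
# dann pro Indexierer nach zugewiesenem Einsatz (Stake) verteilen.
# Hypothetische Beispielwerte; der tatsächliche Protokoll-Code weicht ab.

def distribute_rewards(total_issuance, curation_signal, allocations):
    """Verteilt die Jahresausschüttung zweistufig: Signal, dann Stake."""
    total_signal = sum(curation_signal.values())
    rewards = {}
    for subgraph, signal in curation_signal.items():
        subgraph_rewards = total_issuance * signal / total_signal
        stakes = allocations.get(subgraph, {})
        total_stake = sum(stakes.values())
        for indexer, stake in stakes.items():
            share = subgraph_rewards * stake / total_stake
            rewards[indexer] = rewards.get(indexer, 0.0) + share
    return rewards

# Beispiel: 3 % Inflation auf 1 Mio. GRT => 30.000 GRT pro Jahr
rewards = distribute_rewards(
    30_000,
    {"subgraph-a": 60, "subgraph-b": 40},  # Kurationssignal je Subgraph
    {
        "subgraph-a": {"indexer-1": 100_000, "indexer-2": 100_000},
        "subgraph-b": {"indexer-1": 50_000},
    },
)
print(rewards)  # indexer-1: 9000 + 12000 = 21000, indexer-2: 9000
```

Die Rechnung zeigt nur das Prinzip der anteiligen Verteilung; Auftau-, Epoch- und POI-Logik sind hier bewusst weggelassen.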
-Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack.
+Die Community hat zahlreiche Tools zur Berechnung von Rewards erstellt, die in der [Community-Guides-Sammlung](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c) zusammengefasst sind. Eine aktuelle Liste von Tools finden Sie auch in den Channels #Delegators und #Indexers auf dem [Discord-Server](https://discord.gg/graphprotocol). Hier verlinken wir einen [empfohlenen Allokationsoptimierer](https://github.com/graphprotocol/allocation-optimizer), der in den Indexer-Software-Stack integriert ist.
-### What is a proof of indexing (POI)?
+### Was ist ein Indizierungsnachweis (proof of indexing - POI)?
-POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block.
+POIs werden im Netzwerk verwendet, um zu überprüfen, ob ein Indexierer die ihm zugewiesenen Subgraphen tatsächlich indexiert. Beim Schließen einer Zuweisung muss ein POI für den ersten Block der aktuellen Epoche eingereicht werden, damit diese Zuweisung für Indexierungsbelohnungen in Frage kommt. Ein POI für einen Block ist ein Digest aller Entity-Store-Transaktionen für ein bestimmtes Subgraph-Deployment bis einschließlich dieses Blocks.
-### When are indexing rewards distributed?
+### Wann werden Indizierungsprämien verteilt?
-Allocations are continuously accruing rewards while they're active and allocated within 28 epochs. Rewards are collected by the Indexers, and distributed whenever their allocations are closed. That happens either manually, whenever the Indexer wants to force close them, or after 28 epochs a Delegator can close the allocation for the Indexer, but this results in no rewards. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h).
+Für aktive Zuteilungen fallen kontinuierlich Belohnungen an, solange sie innerhalb von 28 Epochen zugeteilt sind. Die Belohnungen werden von den Indexierern eingesammelt und ausgeschüttet, sobald ihre Zuteilungen geschlossen werden. Das geschieht entweder manuell, wenn der Indexierer das Schließen erzwingen möchte, oder nach 28 Epochen kann ein Delegator die Zuteilung für den Indexierer schließen, was jedoch keine Belohnungen einbringt. 28 Epochen sind die maximale Lebensdauer einer Zuteilung (derzeit dauert eine Epoche ca. 24 Stunden).
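Die 28-Epochen-Regel aus dem obigen Absatz als kleine Skizze (Funktions- und Statusnamen hypothetisch):

```python
# Skizze der maximalen Zuteilungslebensdauer; Namen sind hypothetisch.
MAX_ALLOCATION_LIFETIME = 28  # Epochen; eine Epoche dauert derzeit ca. 24 h

def allocation_status(created_epoch: int, current_epoch: int) -> str:
    """Gibt zurueck, ob eine Zuteilung noch Belohnungen ansammelt oder von
    einem Delegator (ohne Belohnung fuer den Indexierer) geschlossen werden kann."""
    age = current_epoch - created_epoch
    if age < MAX_ALLOCATION_LIFETIME:
        return "aktiv"        # sammelt weiter Belohnungen an
    return "schliessbar"      # >= 28 Epochen: Schliessen durch Delegator moeglich

print(allocation_status(100, 120))  # aktiv (20 Epochen alt)
print(allocation_status(100, 130))  # schliessbar (30 Epochen alt)
```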
-### Can pending indexing rewards be monitored?
+### Können ausstehende Indizierungsprämien überwacht werden?
-The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316) function that can be used to check the pending rewards for a specific allocation.
+Der RewardsManager-Vertrag verfügt über eine schreibgeschützte Funktion [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316), mit der die ausstehenden Rewards für eine bestimmte Zuweisung überprüft werden können.
-Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps:
+Viele der von der Community erstellten Dashboards enthalten ausstehende Prämienwerte und können einfach manuell überprüft werden, indem Sie diesen Schritten folgen:
-1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations:
+1. Fragen Sie den [Mainnet-Subgraphen](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) ab, um die IDs aller aktiven Zuweisungen zu erhalten:
```graphql
query indexerAllocations {
@@ -57,138 +57,138 @@ query indexerAllocations {
}
```
-Use Etherscan to call `getRewards()`:
+Verwenden Sie Etherscan, um `getRewards()` aufzurufen:
-- Navigate to [Etherscan interface to Rewards contract](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract)
-- To call `getRewards()`:
- - Expand the **9. getRewards** dropdown.
- - Enter the **allocationID** in the input.
- - Click the **Query** button.
+- Navigieren Sie zur [Etherscan-Schnittstelle des Rewards-Vertrags](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract)
+- Zum Aufrufen von `getRewards()`:
+ - Erweitern Sie das Dropdown-Menü **9. getRewards**.
+ - Geben Sie die **allocationID** in die Eingabe ein.
+ - Klicken Sie auf die Schaltfläche **Abfrage**.
-### What are disputes and where can I view them?
+### Was sind Streitfälle und wo kann ich sie einsehen?
-Indexer's queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies, depending on the type of dispute. Queries/attestations have 7 epochs dispute window, whereas allocations have 56 epochs. After these periods pass, disputes cannot be opened against either of allocations or queries. When a dispute is opened, a deposit of a minimum of 10,000 GRT is required by the Fishermen, which will be locked until the dispute is finalized and a resolution has been given. Fisherman are any network participants that open disputes.
+Sowohl die Abfragen als auch die Zuweisungen eines Indexierers können während des Streitzeitraums auf The Graph angefochten werden. Der Streitzeitraum variiert je nach Art des Streitfalls: Abfragen/Attestierungen haben ein Streitfenster von 7 Epochen, Zuweisungen eines von 56 Epochen. Nach Ablauf dieser Fristen können weder gegen Zuweisungen noch gegen Abfragen Streitfälle eröffnet werden. Wird ein Streitfall eröffnet, müssen die Fischer eine Kaution von mindestens 10.000 GRT hinterlegen, die gesperrt bleibt, bis der Streitfall abgeschlossen und entschieden ist. Fischer sind alle Netzwerkteilnehmer, die Streitfälle eröffnen.
-Disputes have **three** possible outcomes, so does the deposit of the Fishermen.
+Bei Streitigkeiten gibt es **drei** mögliche Ergebnisse, so auch bei der Kaution der Fischer.
-- If the dispute is rejected, the GRT deposited by the Fishermen will be burned, and the disputed Indexer will not be slashed.
-- If the dispute is settled as a draw, the Fishermen's deposit will be returned, and the disputed Indexer will not be slashed.
-- If the dispute is accepted, the GRT deposited by the Fishermen will be returned, the disputed Indexer will be slashed and the Fishermen will earn 50% of the slashed GRT.
+- Wird die Anfechtung zurückgewiesen, werden die von den Fischern hinterlegten GRT verbrannt, und der angefochtene Indexierer wird nicht gekürzt.
+- Wird der Streitfall durch ein Unentschieden entschieden, wird die Kaution des Fischers zurückerstattet und der strittige Indexierer wird nicht gekürzt.
+- Wird dem Einspruch stattgegeben, werden die von den Fischern eingezahlten GRT zurückerstattet, der strittige Indexer wird gekürzt und die Fischer erhalten 50 % der gekürzten GRT.
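Die drei oben aufgezählten Streit-Ergebnisse lassen sich direkt als Skizze nachbilden (Funktions- und Ergebnisnamen hypothetisch):

```python
# Skizze der drei Streit-Ergebnisse aus dem Text oben; Namen hypothetisch.

def resolve_dispute(outcome: str, fisherman_deposit: float, slashed_grt: float = 0.0):
    """Gibt (zurueckerstattete Kaution, Fischer-Belohnung, Indexierer gekuerzt?) zurueck."""
    assert fisherman_deposit >= 10_000, "Mindestkaution: 10.000 GRT"
    if outcome == "abgelehnt":
        return (0.0, 0.0, False)                # Kaution wird verbrannt
    if outcome == "unentschieden":
        return (fisherman_deposit, 0.0, False)  # Kaution zurueck, keine Kuerzung
    if outcome == "stattgegeben":
        # Kaution zurueck, Indexierer gekuerzt, 50 % der gekuerzten GRT an den Fischer
        return (fisherman_deposit, slashed_grt * 0.5, True)
    raise ValueError(outcome)

print(resolve_dispute("stattgegeben", 10_000, slashed_grt=100_000))
```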
-Disputes can be viewed in the UI in an Indexer's profile page under the `Disputes` tab.
+Streitfälle können in der Benutzeroberfläche auf der Profilseite eines Indexierers unter der Registerkarte `Disputes` angezeigt werden.
-### What are query fee rebates and when are they distributed?
+### Was sind Rückerstattungen von Abfragegebühren und wann werden sie ausgeschüttet?
-Query fees are collected by the gateway and distributed to indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). The exponential rebate function is proposed as a way to ensure indexers achieve the best outcome by faithfully serving queries. It works by incentivizing Indexers to allocate a large amount of stake (which can be slashed for erring when serving a query) relative to the amount of query fees they may collect.
+Die Abfragegebühren werden vom Gateway eingezogen und gemäß der exponentiellen Rabattfunktion an die Indexierer verteilt (siehe GIP [hier](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)). Die exponentielle Rabattfunktion wird vorgeschlagen, um sicherzustellen, dass die Indexierer das beste Ergebnis erzielen, indem sie die Abfragen treu bedienen. Sie bietet den Indexierern einen Anreiz, einen hohen Einsatz (der bei Fehlern bei der Bedienung einer Anfrage gekürzt werden kann) im Verhältnis zur Höhe der Abfragegebühren, die sie einnehmen können, zu leisten.
-Once an allocation has been closed the rebates are available to be claimed by the Indexer. Upon claiming, the query fee rebates are distributed to the Indexer and their Delegators based on the query fee cut and the exponential rebate function.
+Sobald eine Zuteilung abgeschlossen ist, können die Rabatte vom Indexierer beansprucht werden. Nach der Beantragung werden die Abfragegebührenrabatte auf der Grundlage der Abfragegebührenkürzung und der exponentiellen Rabattfunktion an den Indexer und seine Delegatoren verteilt.
-### What is query fee cut and indexing reward cut?
+### Was ist die Kürzung der Abfragegebühr und die Kürzung der Indizierungsprämie?
-The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the Indexer and their Delegators. See the last steps in [Staking in the Protocol](/indexing/overview/#stake-in-the-protocol) for instructions on setting the delegation parameters.
+Die Werte `queryFeeCut` und `indexingRewardCut` sind Delegationsparameter, die der Indexer zusammen mit cooldownBlocks setzen kann, um die Verteilung von GRT zwischen dem Indexer und seinen Delegatoren zu kontrollieren. Siehe die letzten Schritte in [Staking im Protokoll](/indexing/overview/#stake-in-the-protocol) für Anweisungen zur Einstellung der Delegationsparameter.
-- **queryFeeCut** - the % of query fee rebates that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the query fees earned when an allocation is closed with the other 5% going to the Delegators.
+- **queryFeeCut** - der Prozentsatz der Rückerstattungen von Abfragegebühren, der an den Indexer verteilt wird. Wenn dieser Wert auf 95 % gesetzt ist, erhält der Indexer 95 % der Abfragegebühren, die beim Abschluss einer Zuteilung anfallen, während die restlichen 5 % an die Delegatoren gehen.
-- **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%.
+- **indexingRewardCut** - der Prozentsatz der Indexierungsbelohnungen, der an den Indexierer verteilt wird. Wenn dieser Wert auf 95 % gesetzt ist, erhält der Indexierer beim Schließen einer Zuweisung 95 % der Indexierungsbelohnungen, und die Delegatoren teilen sich die restlichen 5 %.
-### How do Indexers know which subgraphs to index?
+### Woher wissen die Indexierer, welche Subgraphen indexiert werden sollen?
-Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network:
+Indexierer können sich durch die Anwendung fortgeschrittener Techniken bei der Entscheidung, welche Subgraphen indexiert werden, voneinander abheben. Um eine allgemeine Vorstellung zu vermitteln, diskutieren wir hier einige Schlüsselmetriken, die zur Bewertung von Subgraphen im Netzwerk verwendet werden:
-- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up.
+- **Kurationssignal** - Der Anteil des Netzwerkkurationssignals, der auf einen bestimmten Subgraphen angewandt wird, ist ein guter Indikator für das Interesse an diesem Subgraphen, insbesondere während der Bootstrap-Phase, wenn das Abfragevolumen ansteigt.
-- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand.
+- **Eingezogene Abfragegebühren** - Die historischen Daten zum Volumen der für einen bestimmten Subgraphen eingezogenen Abfragegebühren sind ein guter Indikator für die zukünftige Nachfrage.
-- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply.
+- **Einsatzhöhe** - Die Beobachtung des Verhaltens anderer Indexierer oder die Betrachtung des Anteils am Gesamteinsatz, der bestimmten Subgraphen zugewiesen wird, kann es einem Indexierer ermöglichen, die Angebotsseite für Subgraphenabfragen zu überwachen, um Subgraphen zu identifizieren, in die das Netzwerk Vertrauen zeigt, oder Subgraphen, die möglicherweise einen Bedarf an mehr Angebot aufweisen.
-- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards.
+- **Subgraphen ohne Indizierungsbelohnungen** - Einige Subgraphen erzeugen keine Indizierungsbelohnungen, hauptsächlich weil sie nicht unterstützte Funktionen wie IPFS verwenden oder weil sie ein anderes Netzwerk außerhalb des Hauptnetzes abfragen. Wenn ein Subgraph keine Indizierungsbelohnungen erzeugt, wird eine entsprechende Meldung angezeigt.
-### What are the hardware requirements?
+### Welche Hardware-Anforderungen gibt es?
-- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded.
-- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests.
-- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second.
-- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic.
+- **Small** - Ausreichend, um mit der Indizierung mehrerer Subgraphen zu beginnen, wird wahrscheinlich erweitert werden müssen.
+- **Standard** - Standardeinstellung, wie sie in den k8s/terraform-Beispielmanifesten verwendet wird.
+- **Medium** - Produktionsindexer, der 100 Subgraphen und 200-500 Anfragen pro Sekunde unterstützt.
+- **Large** - Vorbereitet, um alle derzeit verwendeten Subgraphen zu indizieren und Anfragen für den entsprechenden Verkehr zu bedienen.
-| Setup | Postgres<br />(CPUs) | Postgres<br />(memory in GBs) | Postgres<br />(disk in TBs) | VMs<br />(CPUs) | VMs<br />(memory in GBs) |
+| Konfiguration | Postgres<br />(CPUs) | Postgres<br />(Speicher in GB) | Postgres<br />(Festplatte in TB) | VMs<br />(CPUs) | VMs<br />(Speicher in GB) |
| --- | :-: | :-: | :-: | :-: | :-: |
| Small | 4 | 8 | 1 | 4 | 16 |
| Standard | 8 | 30 | 1 | 12 | 48 |
| Medium | 16 | 64 | 2 | 32 | 64 |
| Large | 72 | 468 | 3.5 | 48 | 184 |
-### What are some basic security precautions an Indexer should take?
+### Was sind einige grundlegende Sicherheitsvorkehrungen, die ein Indexierer treffen sollte?
-- **Operator wallet** - Setting up an operator wallet is an important precaution because it allows an Indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See [Stake in Protocol](/indexing/overview/#stake-in-the-protocol) for instructions.
+- **Operator Wallet** - Die Einrichtung einer Operator Wallet ist eine wichtige Vorsichtsmaßnahme, da sie es einem Indexierer ermöglicht, eine Trennung zwischen seinen Schlüsseln, die den Einsatz kontrollieren, und den Schlüsseln, die für den täglichen Betrieb zuständig sind, aufrechtzuerhalten. Siehe [Stake im Protocol](/indexing/overview/#stake-in-the-protocol) für Anweisungen.
-- **Firewall** - Only the Indexer service needs to be exposed publicly and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC endpoint (default port: 8030), the Indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed.
+- **Firewall** - Nur der Indexierer-Dienst muss öffentlich zugänglich gemacht werden, und es sollte besonders darauf geachtet werden, dass die Admin-Ports und der Datenbankzugriff gesperrt werden: der Graph Node JSON-RPC-Endpunkt (Standard-Port: 8030), der Indexer-Management-API-Endpunkt (Standard-Port: 18000) und der Postgres-Datenbank-Endpunkt (Standard-Port: 5432) sollten nicht öffentlich zugänglich sein.
-## Infrastructure
+## Infrastruktur
-At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.
+Im Zentrum der Infrastruktur eines Indexierers steht der Graph Node, der die indizierten Netzwerke überwacht, Daten gemäß einer Subgraph-Definition extrahiert und lädt und sie als [GraphQL API](/about/#how-the-graph-works) bereitstellt. Der Graph Node muss verbunden sein mit einem Endpunkt, der Daten aus jedem indizierten Netzwerk bereitstellt, einem IPFS-Knoten für die Datenbeschaffung, einer PostgreSQL-Datenbank für die Speicherung sowie Indexierer-Komponenten, die seine Interaktionen mit dem Netzwerk erleichtern.
-- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
+- **PostgreSQL-Datenbank** - Der Hauptspeicher für den Graph Node; hier werden die Subgraph-Daten gespeichert. Der Indexierer-Dienst und der Agent verwenden die Datenbank auch zum Speichern von Statuskanaldaten, Kostenmodellen, Indizierungsregeln und Zuweisungsaktionen.
-- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
+- **Datenendpunkt** - Bei EVM-kompatiblen Netzwerken muss der Graph Node mit einem Endpunkt verbunden sein, der eine EVM-kompatible JSON-RPC-API bereitstellt. Dabei kann es sich um einen einzelnen Client handeln oder um ein komplexeres Setup, das die Last auf mehrere Clients verteilt. Es ist wichtig, sich darüber im Klaren zu sein, dass bestimmte Subgraphen besondere Client-Fähigkeiten erfordern, wie z. B. den Archivmodus und/oder die Parity-Tracing-API.
-- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+- **IPFS-Knoten (Version kleiner als 5)** - Die Metadaten für das Subgraph-Deployment werden im IPFS-Netzwerk gespeichert. Der Graph Node greift in erster Linie während des Deployments eines Subgraphen auf den IPFS-Knoten zu, um das Subgraph-Manifest und alle verknüpften Dateien abzurufen. Netzwerk-Indexierer müssen keinen eigenen IPFS-Knoten hosten; ein IPFS-Knoten für das Netzwerk wird unter https://ipfs.network.thegraph.com gehostet.
-- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway.
+- **Indexierer-Dienst** - Wickelt die gesamte erforderliche externe Kommunikation mit dem Netzwerk ab. Teilt Kostenmodelle und Indizierungsstatus, leitet Abfrageanfragen von Gateways an einen Graph Node weiter und verwaltet die Abfragezahlungen über Statuskanäle mit dem Gateway.
-- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations.
+- **Indexierer-Agent** - Erleichtert die Onchain-Interaktionen des Indexierers, einschließlich der Registrierung im Netzwerk, der Verwaltung von Subgraph-Deployments auf seinen Graph Nodes und der Verwaltung von Zuweisungen.
-- **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server.
+- **Prometheus Metrics Server** - Die Komponenten Graph Node und Indexierer protokollieren ihre Metriken auf dem Metrics Server.
-Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes.
+Hinweis: Um eine flexible Skalierung zu unterstützen, wird empfohlen, Abfrage- und Indizierungsbelange auf verschiedene Knotengruppen zu verteilen: Abfrageknoten und Indexknoten.
-### Ports overview
+### Übersicht über Ports
-> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the Graph Node JSON-RPC and the Indexer management endpoints detailed below.
+> **Wichtig**: Seien Sie vorsichtig damit, Ports öffentlich zugänglich zu machen - **Verwaltungsports** sollten unter Verschluss gehalten werden. Dies gilt auch für den Graph Node JSON-RPC und die Indexierer-Verwaltungsendpunkte, die im Folgenden beschrieben werden.
-#### Graph Node
+#### Graph-Knoten
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| Port | Verwendungszweck | Routen | CLI-Argument | Umgebungsvariable |
| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server<br />(for subgraph queries) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS<br />(for subgraph subscriptions) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC<br />(for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+| 8000 | GraphQL HTTP Server<br />(für Subgraph-Abfragen) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS<br />(für Subgraphen-Abonnements) | /subgraphs/id/...<br />/subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC<br />(zum Verwalten von Deployments) | / | \--admin-port | - |
+| 8030 | Subgraph-Indizierungsstatus-API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus-Metriken | /metrics | \--metrics-port | - |
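Die Status-API lässt sich beispielsweise so abfragen (nur eine Skizze; Annahme: ein lokal laufender Graph Node mit Standard-Ports):

```sh
# Skizze, Annahme: lokaler Graph Node mit Standard-Ports
curl -s -X POST http://localhost:8030/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ indexingStatuses { subgraph synced health } }"}'
```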
-#### Indexer Service
+#### Indexierer-Service
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| Port | Verwendungszweck | Routen | CLI-Argument | Umgebungsvariable |
| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP server<br />(for paid subgraph queries) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
-| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
+| 7600 | GraphQL HTTP Server<br />(für bezahlte Subgraph-Abfragen) | /subgraphs/id/...<br />/status<br />/channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
+| 7300 | Prometheus-Metriken | /metrics | \--metrics-port | - |
-#### Indexer Agent
+#### Indexierer-Agent
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| ---- | ---------------------- | ------ | -------------------------- | --------------------------------------- |
-| 8000 | Indexer management API | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` |
+| Port | Verwendungszweck | Routen | CLI-Argument | Umgebungsvariable |
+| ---- | ----------------------- | ------ | -------------------------- | --------------------------------------- |
+| 8000 | Indexer-Verwaltungs-API | / | \--indexer-management-port | `INDEXER_AGENT_INDEXER_MANAGEMENT_PORT` |
-### Setup server infrastructure using Terraform on Google Cloud
+### Einrichten einer Server-Infrastruktur mit Terraform auf Google Cloud
-> Note: Indexers can alternatively use AWS, Microsoft Azure, or Alibaba.
+> Hinweis: Indexierer können alternativ AWS, Microsoft Azure oder Alibaba nutzen.
-#### Install prerequisites
+#### Installieren Sie die Voraussetzungen
-- Google Cloud SDK
-- Kubectl command line tool
+- Google Cloud-SDK
+- Kubectl-Befehlszeilentool
- Terraform
-#### Create a Google Cloud Project
+#### Erstellen Sie ein Google Cloud-Projekt
-- Clone or navigate to the [Indexer repository](https://github.com/graphprotocol/indexer).
+- Klonen oder navigieren Sie zum [Indexierer-Repository](https://github.com/graphprotocol/indexer).
-- Navigate to the `./terraform` directory, this is where all commands should be executed.
+- Navigieren Sie zum Verzeichnis `./terraform`, in dem alle Befehle ausgeführt werden sollen.
```sh
cd terraform
```
-- Authenticate with Google Cloud and create a new project.
+- Authentifizieren Sie sich bei Google Cloud und erstellen Sie ein neues Projekt.
```sh
gcloud auth login
@@ -196,9 +196,9 @@ project=
gcloud projects create --enable-cloud-apis $project
```
-- Use the Google Cloud Console's billing page to enable billing for the new project.
+- Verwenden Sie die Abrechnungsseite der Google Cloud Console, um die Abrechnung für das neue Projekt zu aktivieren.
-- Create a Google Cloud configuration.
+- Erstellen Sie eine Google Cloud-Konfiguration.
```sh
proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project")
@@ -208,7 +208,7 @@ gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
```
-- Enable required Google Cloud APIs.
+- Aktivieren Sie die erforderlichen Google Cloud-APIs.
```sh
gcloud services enable compute.googleapis.com
@@ -217,7 +217,7 @@ gcloud services enable servicenetworking.googleapis.com
gcloud services enable sqladmin.googleapis.com
```
-- Create a service account.
+- Erstellen Sie ein Service-Konto.
```sh
svc_name=
@@ -235,7 +235,7 @@ gcloud projects add-iam-policy-binding $proj_id \
--role roles/editor
```
-- Enable peering between database and Kubernetes cluster that will be created in the next step.
+- Aktivieren Sie das Peering zwischen der Datenbank und dem Kubernetes-Cluster, der im nächsten Schritt erstellt wird.
```sh
gcloud compute addresses create google-managed-services-default \
@@ -243,41 +243,41 @@ gcloud compute addresses create google-managed-services-default \
--purpose=VPC_PEERING \
--network default \
--global \
- --description 'IP Range for peer networks.'
+ --description 'IP-Bereich für Peer-Netzwerke.'
gcloud services vpc-peerings connect \
--network=default \
--ranges=google-managed-services-default
```
-- Create minimal terraform configuration file (update as needed).
+- Erstellen Sie eine minimale Terraform-Konfigurationsdatei (aktualisieren Sie sie nach Bedarf).
```sh
indexer=<Name Ihres Indexierers>
cat > terraform.tfvars <<EOF
indexer = "$indexer"
database_password = "<Datenbankpasswort>"
EOF
```
-#### Use Terraform to create infrastructure
+#### Verwenden Sie Terraform zum Erstellen einer Infrastruktur
-Before running any commands, read through [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) and create a file `terraform.tfvars` in this directory (or modify the one we created in the last step). For each variable where you want to override the default, or where you need to set a value, enter a setting into `terraform.tfvars`.
+Bevor Sie irgendwelche Befehle ausführen, lesen Sie [variables.tf](https://github.com/graphprotocol/indexer/blob/main/terraform/variables.tf) und erstellen Sie eine Datei `terraform.tfvars` in diesem Verzeichnis (oder ändern Sie die im letzten Schritt erstellte Datei). Für jede Variable, bei der Sie die Standardeinstellung überschreiben oder einen Wert festlegen möchten, geben Sie eine Einstellung in `terraform.tfvars` ein.
-- Run the following commands to create the infrastructure.
+- Führen Sie zum Erstellen der Infrastruktur die folgenden Befehle aus.
```sh
-# Install required plugins
+# Erforderliche Plugins installieren
terraform init
-# View plan for resources to be created
+# Plan für die zu erstellenden Ressourcen anzeigen
terraform plan
-# Create the resources (expect it to take up to 30 minutes)
+# Erstellen Sie die Ressourcen (dies kann bis zu 30 Minuten dauern)
terraform apply
```
-Download credentials for the new cluster into `~/.kube/config` and set it as your default context.
+Laden Sie die Anmeldedaten für den neuen Cluster in `~/.kube/config` herunter und setzen Sie ihn als Standardkontext.
```sh
gcloud container clusters get-credentials $indexer
@@ -285,21 +285,21 @@ kubectl config use-context $(kubectl config get-contexts --output='name'
| grep $indexer)
```
-#### Creating the Kubernetes components for the Indexer
+#### Erstellen der Kubernetes-Komponenten für den Indexierer
-- Copy the directory `k8s/overlays` to a new directory `$dir`, and adjust the `bases` entry in `$dir/kustomization.yaml` so that it points to the directory `k8s/base`.
+- Kopieren Sie das Verzeichnis `k8s/overlays` in ein neues Verzeichnis `$dir` und passen Sie den Eintrag `bases` in `$dir/kustomization.yaml` so an, dass er auf das Verzeichnis `k8s/base` zeigt.
-- Read through all the files in `$dir` and adjust any values as indicated in the comments.
+- Lesen Sie alle Dateien in `$dir` durch und passen Sie alle Werte wie in den Kommentaren angegeben an.
-Deploy all resources with `kubectl apply -k $dir`.
+Stellen Sie alle Ressourcen mit `kubectl apply -k $dir` bereit.
-### Graph Node
+### Graph-Knoten
-[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint.
+[Graph Node](https://github.com/graphprotocol/graph-node) ist eine quelloffene Rust-Implementierung, die Ereignisse aus der Ethereum-Blockchain bezieht, um einen Datenspeicher deterministisch zu aktualisieren, der über den GraphQL-Endpunkt abgefragt werden kann. Entwickler verwenden Subgraphen zur Definition ihres Schemas und eine Reihe von Mappings zur Umwandlung der von der Blockchain bezogenen Daten. Der Graph Node übernimmt die Synchronisierung der gesamten Kette, die Überwachung auf neue Blöcke und die Bereitstellung über einen GraphQL-Endpunkt.
-#### Getting started from source
+#### Einstieg in den Sourcecode
-#### Install prerequisites
+#### Installieren Sie die Voraussetzungen
- **Rust**
@@ -307,15 +307,15 @@ Deploy all resources with `kubectl apply -k $dir`.
- **IPFS**
-- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed.
+- **Zusätzliche Anforderungen für Ubuntu-Benutzer** - Um einen Graph Node unter Ubuntu zu betreiben, sind möglicherweise einige zusätzliche Pakete erforderlich.
```sh
sudo apt-get install -y clang libpq-dev libssl-dev pkg-config
```
-#### Setup
+#### Konfiguration
-1. Start a PostgreSQL database server
+1. Starten Sie einen PostgreSQL-Datenbankserver
```sh
initdb -D .postgres
@@ -323,9 +323,9 @@ pg_ctl -D .postgres -l logfile start
createdb graph-node
```
-2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build`
+2. Klonen Sie das [Graph Node](https://github.com/graphprotocol/graph-node)-Repo und kompilieren Sie den Sourcecode mit `cargo build`
-3. Now that all the dependencies are setup, start the Graph Node:
+3. Nachdem alle Abhängigkeiten eingerichtet sind, starten Sie den Graph Node:
```sh
cargo run -p graph-node --release -- \
@@ -334,48 +334,48 @@ cargo run -p graph-node --release -- \
--ipfs https://ipfs.network.thegraph.com
```
-#### Getting started using Docker
+#### Erste Schritte mit Docker
-#### Prerequisites
+#### Voraussetzungen
-- **Ethereum node** - By default, the docker compose setup will use mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545) to connect to the Ethereum node on your host machine. You can replace this network name and url by updating `docker-compose.yaml`.
+- **Ethereum-Knoten** - Standardmäßig verwendet das Docker-Compose-Setup Mainnet: [http://host.docker.internal:8545](http://host.docker.internal:8545), um sich mit dem Ethereum-Knoten auf Ihrem Host-Rechner zu verbinden. Sie können diesen Netzwerknamen und diese URL ersetzen, indem Sie die Datei `docker-compose.yaml` aktualisieren.
-#### Setup
+#### Konfiguration
-1. Clone Graph Node and navigate to the Docker directory:
+1. Klonen Sie den Graph-Knoten und navigieren Sie zum Docker-Verzeichnis:
```sh
git clone https://github.com/graphprotocol/graph-node
cd graph-node/docker
```
-2. For linux users only - Use the host IP address instead of `host.docker.internal` in the `docker-compose.yaml` using the included script:
+2. Nur für Linux-Benutzer - Verwenden Sie die Host-IP-Adresse anstelle von `host.docker.internal` in der Datei `docker-compose.yaml` mit Hilfe des mitgelieferten Skripts:
```sh
./setup.sh
```
-3. Start a local Graph Node that will connect to your Ethereum endpoint:
+3. Starten Sie einen lokalen Graph-Knoten, der sich mit Ihrem Ethereum-Endpunkt verbindet:
```sh
docker-compose up
```
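Anschließend kann ein Subgraph auf dem lokalen Knoten angelegt und bereitgestellt werden (nur eine Skizze; Annahme: `@graphprotocol/graph-cli` ist installiert, der Name `example/my-subgraph` ist ein Platzhalter):

```sh
# Skizze, Annahme: graph-cli installiert; "example/my-subgraph" ist ein Platzhalter
graph create --node http://localhost:8020/ example/my-subgraph
graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 example/my-subgraph
```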
-### Indexer components
+### Indexierer-Komponenten
-To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of Typescript applications for facilitating an Indexers network participation. There are three Indexer components:
+Um erfolgreich am Netzwerk teilzunehmen, sind nahezu ständige Überwachung und Interaktion erforderlich. Daher haben wir eine Reihe von Typescript-Anwendungen entwickelt, die die Netzwerkteilnahme eines Indexierers erleichtern. Es gibt drei Indexierer-Komponenten:
-- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each.
+- **Indexierer-Agent** - Der Agent überwacht das Netzwerk und die eigene Infrastruktur des Indexierers und verwaltet, welche Subgraph-Deployments indiziert werden, wofür onchain Zuweisungen erfolgen und wie viel jeweils zugewiesen wird.
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways.
+- **Indexierer-Dienst** - Die einzige Komponente, die extern zugänglich gemacht werden muss. Der Dienst leitet Subgraph-Abfragen an den Graph-Knoten weiter, verwaltet Zustandskanäle für Abfragezahlungen und gibt wichtige Informationen zur Entscheidungsfindung an Clients wie die Gateways weiter.
-- **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules.
+- **Indexierer-CLI** - Die Befehlszeilenschnittstelle zur Verwaltung des Indexierer-Agenten. Sie ermöglicht Indexierern die Verwaltung von Kostenmodellen, manuellen Zuweisungen, Aktionswarteschlangen und Indizierungsregeln.
-#### Getting started
+#### Erste Schritte
-The Indexer agent and Indexer service should be co-located with your Graph Node infrastructure. There are many ways to set up virtual execution environments for your Indexer components; here we'll explain how to run them on baremetal using NPM packages or source, or via kubernetes and docker on the Google Cloud Kubernetes Engine. If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on [Discord](https://discord.gg/graphprotocol)! Remember to [stake in the protocol](/indexing/overview/#stake-in-the-protocol) before starting up your Indexer components!
+Der Indexierer-Agent und der Indexierer-Dienst sollten gemeinsam mit Ihrer Graph-Node-Infrastruktur betrieben werden. Es gibt viele Möglichkeiten, virtuelle Ausführungsumgebungen für Ihre Indexierer-Komponenten einzurichten; hier erklären wir, wie Sie sie auf Bare-Metal mit NPM-Paketen oder aus dem Sourcecode oder über Kubernetes und Docker auf der Google Cloud Kubernetes Engine ausführen. Wenn sich diese Einrichtungsbeispiele nicht gut auf Ihre Infrastruktur übertragen lassen, gibt es wahrscheinlich einen Community-Leitfaden, auf den Sie sich beziehen können. Kommen Sie auf [Discord](https://discord.gg/graphprotocol) vorbei! Vergessen Sie nicht, [im Protokoll zu staken](/indexing/overview/#stake-in-the-protocol), bevor Sie Ihre Indexierer-Komponenten starten!
-#### From NPM packages
+#### Aus NPM-Paketen
```sh
npm install -g @graphprotocol/indexer-service
@@ -398,7 +398,7 @@ graph indexer connect http://localhost:18000/
graph indexer ...
```
-#### From source
+#### Vom Sourcecode
```sh
# From Repo root directory
@@ -418,16 +418,16 @@ cd packages/indexer-cli
./bin/graph-indexer-cli indexer ...
```
-#### Using docker
+#### Verwenden von Docker
-- Pull images from the registry
+- Ziehen Sie die Images aus der Registry
```sh
docker pull ghcr.io/graphprotocol/indexer-service:latest
docker pull ghcr.io/graphprotocol/indexer-agent:latest
```
-Or build images locally from source
+Oder erstellen Sie Images lokal aus dem Sourcecode
```sh
# Indexer service
@@ -442,24 +442,24 @@ docker build \
-t indexer-agent:latest \
```
-- Run the components
+- Führen Sie die Komponenten aus
```sh
docker run -p 7600:7600 -it indexer-service:latest ...
docker run -p 18000:8000 -it indexer-agent:latest ...
```
-**NOTE**: After starting the containers, the Indexer service should be accessible at [http://localhost:7600](http://localhost:7600) and the Indexer agent should be exposing the Indexer management API at [http://localhost:18000/](http://localhost:18000/).
+**HINWEIS**: Nach dem Start der Container sollte der Indexierer-Dienst unter [http://localhost:7600](http://localhost:7600) erreichbar sein und der Indexierer-Agent sollte die Indexierer-Verwaltungs-API unter [http://localhost:18000/](http://localhost:18000/) zur Verfügung stellen.
-#### Using K8s and Terraform
+#### Verwendung von K8s und Terraform
-See the [Setup Server Infrastructure Using Terraform on Google Cloud](/indexing/overview/#setup-server-infrastructure-using-terraform-on-google-cloud) section
+Siehe den Abschnitt [Einrichten der Serverinfrastruktur mit Terraform in Google Cloud](/indexing/overview/#setup-server-infrastructure-using-terraform-on-google-cloud)
-#### Usage
+#### Verwendung
-> **NOTE**: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format `COMPONENT_NAME_VARIABLE_NAME`(ex. `INDEXER_AGENT_ETHEREUM`).
+> **HINWEIS**: Alle Laufzeit-Konfigurationsvariablen können entweder beim Start als Parameter an den Befehl übergeben oder über Umgebungsvariablen im Format `COMPONENT_NAME_VARIABLE_NAME` (z. B. `INDEXER_AGENT_ETHEREUM`) gesetzt werden.
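Das Namensschema lässt sich so veranschaulichen (nur eine Skizze; `<url>` ist ein Platzhalter):

```sh
# Beide Formen sind äquivalent (Skizze; <url> ist ein Platzhalter):
#   graph-indexer-agent start --ethereum <url> ...
#   INDEXER_AGENT_ETHEREUM=<url> graph-indexer-agent start ...
# Namensschema: KOMPONENTENNAME_VARIABLENNAME in Großbuchstaben
component=INDEXER_AGENT
flag=ethereum
env_var="${component}_$(printf '%s' "$flag" | tr '[:lower:]' '[:upper:]')"
echo "$env_var"
```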
-#### Indexer agent
+#### Indexierer-Agent
```sh
graph-indexer-agent start \
@@ -488,7 +488,7 @@ graph-indexer-agent start \
| pino-pretty
```
-#### Indexer service
+#### Indexierer-Service
```sh
SERVER_HOST=localhost \
@@ -514,58 +514,58 @@ graph-indexer-service start \
| pino-pretty
```
-#### Indexer CLI
+#### Indexierer-CLI
-The Indexer CLI is a plugin for [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli) accessible in the terminal at `graph indexer`.
+Das Indexierer-CLI ist ein Plugin für [`@graphprotocol/graph-cli`](https://www.npmjs.com/package/@graphprotocol/graph-cli), das im Terminal unter `graph indexer` erreichbar ist.
```sh
graph indexer connect http://localhost:18000
graph indexer status
```
-#### Indexer management using Indexer CLI
+#### Indexierer-Verwaltung mit Indexierer-CLI
-The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution.
+Das vorgeschlagene Werkzeug für die Interaktion mit der **Indexierer-Management-API** ist das **Indexierer-CLI**, eine Erweiterung der **Graph CLI**. Der Indexierer-Agent benötigt Eingaben von einem Indexierer, um in dessen Namen autonom mit dem Netzwerk zu interagieren. Die Mechanismen zur Definition des Verhaltens des Indexierer-Agenten sind der **Zuweisungsmanagement**-Modus und **Indizierungsregeln**. Im automatischen Modus kann ein Indexierer **Indizierungsregeln** verwenden, um seine spezifische Strategie für die Auswahl der Subgraphen anzuwenden, die er indiziert und für die er Abfragen bedient. Die Regeln werden über eine GraphQL-API verwaltet, die vom Agenten bereitgestellt wird und als Indexierer-Management-API bekannt ist. Im manuellen Modus kann ein Indexierer Zuweisungsaktionen über die **Aktionswarteschlange** erstellen und sie explizit genehmigen, bevor sie ausgeführt werden. Im Überwachungsmodus werden **Indizierungsregeln** verwendet, um die **Aktionswarteschlange** zu füllen; auch hier ist eine ausdrückliche Genehmigung für die Ausführung erforderlich.
-#### Usage
+#### Verwendung
-The **Indexer CLI** connects to the Indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here.
+Die **Indexierer-CLI** verbindet sich mit dem Indexierer-Agenten, in der Regel über Port-Forwarding, so dass die CLI nicht auf demselben Server oder Cluster laufen muss. Um Ihnen den Einstieg zu erleichtern und etwas Kontext zu liefern, wird die CLI hier kurz beschrieben.
-- `graph indexer connect <url>` - Connect to the Indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: `kubectl port-forward pod/<pod-name> 8000:8000`)
+- `graph indexer connect <url>` - Verbindet mit der Indexierer-Verwaltungs-API. Typischerweise wird die Verbindung zum Server über Port-Forwarding geöffnet, so dass die CLI einfach aus der Ferne bedient werden kann. (Beispiel: `kubectl port-forward pod/<pod-name> 8000:8000`)
-- `graph indexer rules get [options] <deployment-id> [<key1> ...]` - Get one or more indexing rules using `all` as the `<deployment-id>` to get all rules, or `global` to get the global defaults. An additional argument `--merged` can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the Indexer agent.
+- `graph indexer rules get [options] <deployment-id> [<key1> ...]` - Holt eine oder mehrere Indizierungsregeln unter Verwendung von `all` als `<deployment-id>`, um alle Regeln zu erhalten, oder `global`, um die globalen Standardwerte zu erhalten. Ein zusätzliches Argument `--merged` kann verwendet werden, um anzugeben, dass einsatzspezifische Regeln mit der globalen Regel zusammengeführt werden. Auf diese Weise werden sie im Indexierer-Agenten angewendet.
-- `graph indexer rules set [options] <deployment-id> <key1> <value1> ...` - Set one or more indexing rules.
+- `graph indexer rules set [options] <deployment-id> <key1> <value1> ...` - Eine oder mehrere Indizierungsregeln setzen.
-- `graph indexer rules start [options] <deployment-id>` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed.
+- `graph indexer rules start [options] <deployment-id>` - Startet die Indizierung eines Subgraph-Deployments, wenn es verfügbar ist, und setzt seine `decisionBasis` auf `always`, so dass der Indexierer-Agent es immer indiziert. Wenn die globale Regel auf `always` gesetzt ist, werden alle verfügbaren Subgraphen im Netzwerk indiziert.
-- `graph indexer rules stop [options] <deployment-id>` - Stop indexing a deployment and set its `decisionBasis` to `never`, so it will skip this deployment when deciding on deployments to index.
+- `graph indexer rules stop [options] <deployment-id>` - Stoppt die Indizierung eines Deployments und setzt seine `decisionBasis` auf `never`, so dass dieses Deployment bei der Entscheidung über die zu indizierenden Deployments übersprungen wird.
-- `graph indexer rules maybe [options] <deployment-id>` — Set the `decisionBasis` for a deployment to `rules`, so that the Indexer agent will use indexing rules to decide whether to index this deployment.
+- `graph indexer rules maybe [options] <deployment-id>` - Setzt die `decisionBasis` für ein Deployment auf `rules`, so dass der Indexierer-Agent anhand von Indizierungsregeln entscheidet, ob dieses Deployment indiziert werden soll.
-- `graph indexer actions get [options] <action-id>` - Fetch one or more actions using `all` or leave `action-id` empty to get all actions. An additional argument `--status` can be used to print out all actions of a certain status.
+- `graph indexer actions get [options] <action-id>` - Holt eine oder mehrere Aktionen; mit `all` oder einer leeren `action-id` werden alle Aktionen geholt. Ein zusätzliches Argument `--status` kann verwendet werden, um alle Aktionen mit einem bestimmten Status auszugeben.
-- `graph indexer action queue allocate <deployment-id> <allocation-amount>` - Queue allocation action
+- `graph indexer action queue allocate <deployment-id> <allocation-amount>` - Stellt eine Zuweisungsaktion in die Warteschlange
-- `graph indexer action queue reallocate <deployment-id> <allocation-id> <allocation-amount>` - Queue reallocate action
+- `graph indexer action queue reallocate <deployment-id> <allocation-id> <allocation-amount>` - Stellt eine Neuzuweisungsaktion in die Warteschlange
-- `graph indexer action queue unallocate <deployment-id> <allocation-id>` - Queue unallocate action
+- `graph indexer action queue unallocate <deployment-id> <allocation-id>` - Stellt eine Aktion zum Aufheben einer Zuweisung in die Warteschlange
-- `graph indexer actions cancel [<action-id> ...]` - Cancel all actions in the queue if id is unspecified, otherwise cancel an array of ids with space as separator
+- `graph indexer actions cancel [<action-id> ...]` - Bricht alle Aktionen in der Warteschlange ab, wenn keine id angegeben ist; andernfalls wird eine Liste von ids mit Leerzeichen als Trennzeichen abgebrochen
-- `graph indexer actions approve [<action-id> ...]` - Approve multiple actions for execution
+- `graph indexer actions approve [<action-id> ...]` - Gibt mehrere Aktionen zur Ausführung frei
-- `graph indexer actions execute approve` - Force the worker to execute approved actions immediately
+- `graph indexer actions execute approve` - Erzwingt die sofortige Ausführung genehmigter Aktionen durch den Worker
-All commands which display rules in the output can choose between the supported output formats (`table`, `yaml`, and `json`) using the `-output` argument.
+Alle Befehle, die Regeln in der Ausgabe anzeigen, können zwischen den unterstützten Ausgabeformaten (`table`, `yaml` und `json`) mit dem Argument `-output` wählen.
-#### Indexing rules
+#### Indizierungsregeln
-Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing.
+Indizierungsregeln können entweder als globale Standardwerte oder für bestimmte Subgraph-Deployments unter Verwendung ihrer IDs angewendet werden. Die Felder `deployment` und `decisionBasis` sind obligatorisch, während alle anderen Felder optional sind. Wenn eine Indizierungsregel `rules` als `decisionBasis` hat, vergleicht der Indexierer-Agent die gesetzten (nicht-null) Schwellenwerte dieser Regel mit den Werten, die für das entsprechende Deployment aus dem Netzwerk geholt wurden. Hat das Subgraph-Deployment Werte über (oder unter) einem der Schwellenwerte, wird es für die Indizierung ausgewählt.
-For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`.
+Wenn zum Beispiel die globale Regel einen `minStake` von **5** (GRT) hat, wird jeder Subgraph-Einsatz indiziert, dem mehr als 5 (GRT) an Einsatz zugewiesen wurden. Zu den Schwellenwertregeln gehören `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake` und `minAverageQueryFees`.
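Die Schwellenwert-Logik lässt sich vereinfacht nachvollziehen. Das Folgende ist eine hypothetische Python-Skizze (nicht der tatsächliche Code des Indexer-Agenten); die Feldnamen lehnen sich an das Datenmodell an, die Netzwerkwerte sind frei erfundene Beispieldaten:

```python
# Vereinfachte, hypothetische Skizze der Schwellenwert-Auswertung.
# Annahme: ein Einsatz wird ausgewählt, sobald ein gesetzter
# (non-null) Schwellenwert der Regel überschritten wird.

def should_index(rule, network_values):
    checks = [
        ("minStake", "stake"),
        ("minSignal", "signal"),
        ("minAverageQueryFees", "avgQueryFees"),
    ]
    for rule_field, net_field in checks:
        threshold = rule.get(rule_field)
        if threshold is not None and network_values[net_field] > threshold:
            return True
    return False

# Globale Regel aus dem Beispiel: minStake = 5 (GRT)
global_rule = {"minStake": 5}
print(should_index(global_rule, {"stake": 10, "signal": 0, "avgQueryFees": 0}))  # True
print(should_index(global_rule, {"stake": 3, "signal": 0, "avgQueryFees": 0}))   # False
```

Ein Einsatz mit genau 5 GRT würde hier nicht ausgewählt, da die Regel "mehr als" verlangt.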
-Data model:
+Datenmodell:
```graphql
type IndexingRule {
@@ -599,7 +599,7 @@ IndexingDecisionBasis {
}
```
-Example usage of indexing rule:
+Beispiel für die Verwendung der Indizierungsregel:
```
graph indexer rules offchain QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK
@@ -611,20 +611,20 @@ graph indexer rules stop QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK
graph indexer rules delete QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK
```
-#### Actions queue CLI
+#### Befehlszeilenschnittstelle (CLI) für die Aktionswarteschlange
-The indexer-cli provides an `actions` module for manually working with the action queue. It uses the **Graphql API** hosted by the indexer management server to interact with the actions queue.
+Die indexer-cli stellt ein `actions`-Modul für die manuelle Arbeit mit der Aktionswarteschlange bereit. Sie verwendet die **GraphQL-API**, die vom Indexierer-Verwaltungsserver gehostet wird, um mit der Aktionswarteschlange zu interagieren.
-The action execution worker will only grab items from the queue to execute if they have `ActionStatus = approved`. In the recommended path actions are added to the queue with ActionStatus = queued, so they must then be approved in order to be executed onchain. The general flow will look like:
+Der Action Execution Worker holt sich nur dann Elemente aus der Warteschlange, um sie auszuführen, wenn sie den Status `ActionStatus = approved` haben. Im empfohlenen Pfad werden Aktionen der Warteschlange mit ActionStatus = queued hinzugefügt, so dass sie dann genehmigt werden müssen, um in der Kette ausgeführt zu werden. Der allgemeine Ablauf sieht dann wie folgt aus:
-- Action added to the queue by the 3rd party optimizer tool or indexer-cli user
-- Indexer can use the `indexer-cli` to view all queued actions
-- Indexer (or other software) can approve or cancel actions in the queue using the `indexer-cli`. The approve and cancel commands take an array of action ids as input.
-- The execution worker regularly polls the queue for approved actions. It will grab the `approved` actions from the queue, attempt to execute them, and update the values in the db depending on the status of execution to `success` or `failed`.
-- If an action is successful the worker will ensure that there is an indexing rule present that tells the agent how to manage the allocation moving forward, useful when taking manual actions while the agent is in `auto` or `oversight` mode.
-- The indexer can monitor the action queue to see a history of action execution and if needed re-approve and update action items if they failed execution. The action queue provides a history of all actions queued and taken.
+- Eine Aktion wird vom Drittanbieter-Optimierungstool oder von einem indexer-cli-Benutzer zur Warteschlange hinzugefügt
+- Indexierer kann die `indexer-cli` verwenden, um alle in der Warteschlange stehenden Aktionen zu sehen
+- Indexierer (oder andere Software) kann Aktionen in der Warteschlange mit Hilfe des `indexer-cli` genehmigen oder abbrechen. Die Befehle approve und cancel nehmen ein Array von Aktions-Ids als Eingabe.
+- Der Ausführungsworker fragt die Warteschlange regelmäßig nach genehmigten Aktionen ab. Er holt die `approved` Aktionen aus der Warteschlange, versucht, sie auszuführen, und aktualisiert die Werte in der Datenbank je nach Ausführungsstatus auf `success` oder `failed`.
+- Ist eine Aktion erfolgreich, stellt der Worker sicher, dass eine Indizierungsregel vorhanden ist, die dem Agenten mitteilt, wie er die Zuweisung in Zukunft verwalten soll. Dies ist nützlich, wenn manuelle Aktionen durchgeführt werden, während sich der Agent im `auto`- oder `oversight`-Modus befindet.
+- Der Indexierer kann die Aktionswarteschlange überwachen, um einen Überblick über die Ausführung von Aktionen zu erhalten und bei Bedarf Aktionen, deren Ausführung fehlgeschlagen ist, erneut zu genehmigen und zu aktualisieren. Die Aktionswarteschlange bietet einen Überblick über alle in der Warteschlange stehenden und ausgeführten Aktionen.
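Der oben beschriebene Ablauf lässt sich als Spielzeugmodell skizzieren. Dies ist eine hypothetische Python-Skizze, die nur die Zustandsübergänge `queued` → `approved` → `success`/`failed` veranschaulicht, nicht die echte Implementierung des Workers:

```python
# Hypothetisches Spielzeugmodell der Aktionswarteschlange
# (veranschaulicht nur den beschriebenen Ablauf, nicht den echten Worker).

actions = [
    {"id": 1, "type": "allocate", "status": "queued"},
    {"id": 2, "type": "unallocate", "status": "queued"},
]

def approve(ids):
    # Entspricht `graph indexer actions approve 1 ...`:
    # nimmt ein Array von Aktions-IDs entgegen.
    for a in actions:
        if a["id"] in ids:
            a["status"] = "approved"

def worker_tick(execute):
    # Der Worker holt nur Aktionen mit Status "approved" aus der
    # Warteschlange und setzt den Status je nach Ausführungsergebnis.
    for a in actions:
        if a["status"] == "approved":
            a["status"] = "success" if execute(a) else "failed"

approve([1])
worker_tick(lambda a: True)
print([a["status"] for a in actions])  # ['success', 'queued']
```

Aktion 2 bleibt `queued`, bis sie genehmigt wird; genau deshalb ist im empfohlenen Pfad die Genehmigung der notwendige Zwischenschritt vor der Onchain-Ausführung.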
-Data model:
+Datenmodell:
```graphql
Type ActionInput {
@@ -657,7 +657,7 @@ ActionType {
}
```
-Example usage from source:
+Beispiel für die Verwendung (aus dem Quellcode):
```bash
graph indexer actions get all
@@ -677,141 +677,141 @@ graph indexer actions approve 1 3 5
graph indexer actions execute approve
```
-Note that supported action types for allocation management have different input requirements:
+Beachten Sie, dass unterstützte Aktionstypen für das Allokationsmanagement unterschiedliche Eingabeanforderungen haben:
-- `Allocate` - allocate stake to a specific subgraph deployment
+- `Allocate` - Einsatz (Stake) einem bestimmten Subgraph-Einsatz zuweisen
- - required action params:
+ - erforderliche Aktionsparameter:
- deploymentID
- amount
-- `Unallocate` - close allocation, freeing up the stake to reallocate elsewhere
+- `Unallocate` - Beendigung der Zuweisung, wodurch der Einsatz für eine andere Zuweisung frei wird
- - required action params:
+ - erforderliche Aktionsparameter:
- allocationID
- deploymentID
- - optional action params:
+ - optionale Aktionsparameter:
- poi
- - force (forces using the provided POI even if it doesn’t match what the graph-node provides)
+      - force (erzwingt die Verwendung des bereitgestellten POI, auch wenn er nicht mit dem übereinstimmt, was der `graph-node` bereitstellt)
-- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment
+- `Reallocate` - Zuweisung atomar schließen und eine neue Zuweisung für denselben Subgraph-Einsatz öffnen
- - required action params:
+ - erforderliche Aktionsparameter:
- allocationID
- deploymentID
- amount
- - optional action params:
+ - optionale Aktionsparameter:
- poi
- - force (forces using the provided POI even if it doesn’t match what the graph-node provides)
+      - force (erzwingt die Verwendung des bereitgestellten POI, auch wenn er nicht mit dem übereinstimmt, was der `graph-node` bereitstellt)
-#### Cost models
+#### Kostenmodelle
-Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers.
+Kostenmodelle ermöglichen eine dynamische Preisgestaltung für Abfragen auf der Grundlage von Markt- und Abfrageattributen. Der Indexierer-Service teilt für jeden Subgraphen, für den er auf Abfragen antworten möchte, ein Kostenmodell mit den Gateways. Die Gateways wiederum nutzen das Kostenmodell, um pro Abfrage Entscheidungen über die Auswahl der Indexierer zu treffen und die Bezahlung mit den ausgewählten Indexierern auszuhandeln.
#### Agora
-The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query.
+Die Agora-Sprache bietet ein flexibles Format zur Deklaration von Kostenmodellen für Abfragen. Ein Agora-Preismodell ist eine Folge von Anweisungen, die für jede Top-Level-Abfrage in einer GraphQL-Abfrage nacheinander ausgeführt werden. Für jede Top-Level-Abfrage bestimmt die erste Anweisung, die ihr entspricht, den Preis für diese Abfrage.
-A statement is comprised of a predicate, which is used for matching GraphQL queries, and a cost expression which when evaluated outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression.
+Eine Anweisung besteht aus einem Prädikat, das zum Abgleich von GraphQL-Abfragen verwendet wird, und einem Kostenausdruck, der bei der Auswertung die Kosten in dezimalen GRT ausgibt. Werte in der benannten Argumentposition einer Abfrage können im Prädikat erfasst und im Ausdruck verwendet werden. Globale Werte können auch gesetzt und durch Platzhalter in einem Ausdruck ersetzt werden.
-Example cost model:
+Beispielkostenmodell:
```
-# This statement captures the skip value,
-# uses a boolean expression in the predicate to match specific queries that use `skip`
-# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global
+# Diese Anweisung erfasst den `skip`-Wert,
+# verwendet einen booleschen Ausdruck im Prädikat, um mit bestimmten Abfragen übereinzustimmen, die `skip` verwenden
+# und einen Kostenausdruck, um die Kosten auf der Grundlage des `skip`-Wertes und des globalen SYSTEM_LOAD-Wertes zu berechnen
query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD;
-# This default will match any GraphQL expression.
-# It uses a Global substituted into the expression to calculate cost
+# Diese Vorgabe passt auf jeden GraphQL-Ausdruck.
+# Sie verwendet einen globalen Wert, der in den Ausdruck eingesetzt wird, um die Kosten zu berechnen
default => 0.1 * $SYSTEM_LOAD;
```
-Example query costing using the above model:
+Beispiel für eine Abfragekostenberechnung unter Verwendung des obigen Modells:
-| Query | Price |
+| Abfrage | Preis |
| ---------------------------------------------------------------------------- | ------- |
| { pairs(skip: 5000) { id } } | 0.5 GRT |
| { tokens { symbol } } | 0.1 GRT |
| { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT |
-#### Applying the cost model
+#### Anwendung des Kostenmodells
-Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the Indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them.
+Kostenmodelle werden über die Indexierer-CLI angewendet, die sie zum Speichern in der Datenbank an die Indexierer-Verwaltungs-API des Indexierer-Agenten übergibt. Der Indexierer-Service holt sie dann ab und stellt sie den Gateways jedes Mal zur Verfügung, wenn diese danach fragen.
```sh
indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }'
indexer cost set model my_model.agora
```
-## Interacting with the network
+## Interaktion mit dem Netzwerk
-### Stake in the protocol
+### Einsatz im Protokoll
-The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions.
+Die ersten Schritte zur Teilnahme am Netzwerk als Indexierer sind die Genehmigung des Protokolls, der Einsatz von Geldern und (optional) die Einrichtung einer Betreiberadresse für die täglichen Interaktionen mit dem Protokoll.
-> Note: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools).
+> Hinweis: In dieser Anleitung wird Remix für die Interaktion mit dem Vertrag verwendet, aber Sie können auch das Tool Ihrer Wahl verwenden ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/) und [MyCrypto](https://www.mycrypto.com/account) sind einige andere bekannte Tools).
-Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexing/overview/#indexer-components) can be started up and begin their interactions with the network.
+Sobald ein Indexierer GRT im Protokoll eingesetzt hat, können die [Indexierer-Komponenten](/indexing/overview/#indexer-components) gestartet werden und ihre Interaktionen mit dem Netzwerk beginnen.
-#### Approve tokens
+#### Token genehmigen
-1. Open the [Remix app](https://remix.ethereum.org/) in a browser
+1. Öffnen Sie die [Remix-App](https://remix.ethereum.org/) in einem Browser
-2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json).
+2. Erstellen Sie im `File Explorer` eine Datei mit dem Namen **GraphToken.abi** mit dem [Token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json).
-3. With `GraphToken.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface.
+3. Wählen Sie die Datei `GraphToken.abi` aus und öffnen Sie sie im Editor. Wechseln Sie in der Remix-Benutzeroberfläche zum Abschnitt `Deploy and run transactions`.
-4. Under environment select `Injected Web3` and under `Account` select your Indexer address.
+4. Wählen Sie unter environment die Option `Injected Web3` und unter `Account` die Adresse Ihres Indexierers aus.
-5. Set the GraphToken contract address - Paste the GraphToken contract address (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) next to `At Address` and click the `At address` button to apply.
+5. Legen Sie die GraphToken-Vertragsadresse fest - Fügen Sie die GraphToken-Vertragsadresse (`0xc944E90C64B2c07662A292be6244BDf05Cda44a7`) neben `At Address` ein und klicken Sie zum Anwenden auf die Schaltfläche `At address`.
-6. Call the `approve(spender, amount)` function to approve the Staking contract. Fill in `spender` with the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) and `amount` with the tokens to stake (in wei).
+6. Rufen Sie die Funktion `approve(spender, amount)` auf, um den Einsatzvertrag zu genehmigen. Geben Sie in `spender` die Adresse des Einsatzvertrags (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) und in `amount` die zu setzenden Token (in wei) ein.
-#### Stake tokens
+#### Token einsetzen
-1. Open the [Remix app](https://remix.ethereum.org/) in a browser
+1. Öffnen Sie die [Remix-App](https://remix.ethereum.org/) in einem Browser
-2. In the `File Explorer` create a file named **Staking.abi** with the staking ABI.
+2. Erstellen Sie im `File Explorer` eine Datei mit dem Namen **Staking.abi** mit dem Staking-ABI.
-3. With `Staking.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface.
+3. Wählen Sie die Datei `Staking.abi` aus und öffnen Sie sie im Editor. Wechseln Sie in der Remix-Benutzeroberfläche zum Abschnitt `Deploy and run transactions`.
-4. Under environment select `Injected Web3` and under `Account` select your Indexer address.
+4. Wählen Sie unter environment die Option `Injected Web3` und unter `Account` die Adresse Ihres Indexierers aus.
-5. Set the Staking contract address - Paste the Staking contract address (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) next to `At Address` and click the `At address` button to apply.
+5. Legen Sie die Adresse des Einsatzvertrags fest - Fügen Sie die Adresse des Einsatzvertrags (`0xF55041E37E12cD407ad00CE2910B8269B01263b9`) neben `At Address` ein und klicken Sie auf die Schaltfläche `At address`, um sie anzuwenden.
-6. Call `stake()` to stake GRT in the protocol.
+6. Rufen Sie `stake()` auf, um GRT im Protokoll einzusetzen.
-7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address.
+7. (Optional) Indexierer können eine andere Adresse als Betreiber (Operator) für ihre Indexierer-Infrastruktur genehmigen, um die Schlüssel, die die Gelder kontrollieren, von denen zu trennen, die alltägliche Aktionen wie die Zuweisung auf Subgraphen und die Bedienung (bezahlter) Abfragen durchführen. Um den Betreiber festzulegen, rufen Sie `setOperator()` mit der Betreiberadresse auf.
-8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks.
+8. (Optional) Um die Verteilung von Belohnungen zu kontrollieren und Delegatoren strategisch anzuziehen, können Indexierer ihre Delegationsparameter aktualisieren, indem sie ihren `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million) und `cooldownBlocks` (Anzahl der Blöcke) anpassen. Dazu rufen Sie `setDelegationParameters()` auf. Das folgende Beispiel stellt den `queryFeeCut` so ein, dass 95% der Abfragerabatte an den Indexierer und 5% an die Delegatoren verteilt werden, stellt den `indexingRewardCut` so ein, dass 60% der Indexierungs-Rewards an den Indexierer und 40% an die Delegatoren verteilt werden, und stellt die `cooldownBlocks`-Periode auf 500 Blöcke ein.
```
setDelegationParameters(950000, 600000, 500)
```
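Die Umrechnung der parts-per-million-Werte aus dem Beispielaufruf lässt sich so nachvollziehen (eine kleine Python-Skizze; die Variablennamen sind frei gewählt):

```python
# Umrechnung der Delegationsparameter aus dem Beispielaufruf
# setDelegationParameters(950000, 600000, 500) - Werte in parts per million (ppm).

PPM = 1_000_000

query_fee_cut = 950_000        # Anteil des Indexierers an den Abfragerabatten
indexing_reward_cut = 600_000  # Anteil des Indexierers an den Indexierungs-Rewards
cooldown_blocks = 500          # Sperrfrist in Blöcken für weitere Änderungen

indexer_query_share = query_fee_cut / PPM          # 95 % für den Indexierer
delegator_query_share = 1 - indexer_query_share    # 5 % für die Delegatoren
indexer_reward_share = indexing_reward_cut / PPM   # 60 % für den Indexierer
delegator_reward_share = 1 - indexer_reward_share  # 40 % für die Delegatoren

print(f"{indexer_query_share:.0%} / {delegator_query_share:.0%}")    # 95% / 5%
print(f"{indexer_reward_share:.0%} / {delegator_reward_share:.0%}")  # 60% / 40%
```

Der Delegatoren-Anteil ergibt sich also immer als Rest zu 1.000.000 ppm; die Cuts selbst geben den Anteil des Indexierers an.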
-### Setting delegation parameters
+### Einstellung der Delegationsparameter
-The `setDelegationParameters()` function in the [staking contract](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) is essential for Indexers, allowing them to set parameters that define their interactions with Delegators, influencing their reward sharing and delegation capacity.
+Die Funktion `setDelegationParameters()` im [Staking-Vertrag](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) ist für Indexierer von entscheidender Bedeutung, da sie es ihnen ermöglicht, Parameter zu setzen, die ihre Interaktion mit Delegatoren definieren und ihre Reward-Aufteilung und Delegationskapazität beeinflussen.
-### How to set delegation parameters
+### Festlegen der Delegationsparameter
-To set the delegation parameters using Graph Explorer interface, follow these steps:
+Gehen Sie wie folgt vor, um die Delegationsparameter über die Graph Explorer-Schnittstelle einzustellen:
-1. Navigate to [Graph Explorer](https://thegraph.com/explorer/).
-2. Connect your wallet. Choose multisig (such as Gnosis Safe) and then select mainnet. Note: You will need to repeat this process for Arbitrum One.
-3. Connect the wallet you have as a signer.
-4. Navigate to the 'Settings' section and select 'Delegation Parameters'. These parameters should be configured to achieve an effective cut within the desired range. Upon entering values in the provided input fields, the interface will automatically calculate the effective cut. Adjust these values as necessary to attain the desired effective cut percentage.
-5. Submit the transaction to the network.
+1. Navigieren Sie zu [Graph Explorer](https://thegraph.com/explorer/).
+2. Verbinden Sie Ihre Wallet. Wählen Sie Multisig (z. B. Gnosis Safe) und dann Mainnet aus. Hinweis: Sie müssen diesen Vorgang für Arbitrum One wiederholen.
+3. Verbinden Sie die Wallet, die Sie als Unterzeichner haben.
+4. Navigieren Sie zum Abschnitt 'Settings' und wählen Sie 'Delegation Parameters'. Diese Parameter sollten so konfiguriert werden, dass ein effektiver Anteil (effective cut) innerhalb des gewünschten Bereichs erreicht wird. Nach Eingabe der Werte in die vorgesehenen Eingabefelder berechnet die Benutzeroberfläche automatisch den effektiven Anteil. Passen Sie diese Werte nach Bedarf an, um den gewünschten effektiven Anteil zu erreichen.
+5. Übermitteln Sie die Transaktion an das Netzwerk.
-> Note: This transaction will need to be confirmed by the multisig wallet signers.
+> Hinweis: Diese Transaktion muss von den Unterzeichnern der Multisig-Wallets bestätigt werden.
-### The life of an allocation
+### Die Lebensdauer einer Zuweisung
-After being created by an Indexer a healthy allocation goes through two states.
+Nachdem sie von einem Indexierer erstellt wurde, durchläuft eine gesunde Zuweisung zwei Zustände.
-- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules.
+- **Aktiv** - Sobald eine Zuweisung in der Kette erstellt wurde ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)), wird sie als **aktiv** betrachtet. Ein Teil des eigenen und/oder delegierten Einsatzes des Indexierers wird einem Subgraph-Einsatz zugewiesen, was ihm erlaubt, Rewards für die Indizierung zu beanspruchen und Abfragen für diesen Subgraph-Einsatz zu bedienen. Der Indexierer-Agent verwaltet die Erstellung von Zuweisungen basierend auf den Indexierer-Regeln.
-- **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)).
+- **Geschlossen** - Ein Indexierer kann eine Zuweisung schließen, sobald 1 Epoche vergangen ist ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)), oder sein Indexierer-Agent schließt die Zuweisung automatisch nach den **maxAllocationEpochs** (derzeit 28 Tage). Wenn eine Zuweisung mit einem gültigen Indizierungsnachweis (POI) geschlossen wird, werden die Rewards für die Indizierung an den Indexierer und seine Delegatoren verteilt ([weitere Informationen](/indexing/overview/#how-are-indexing-rewards-distributed)).
-Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically.
+Indexierern wird empfohlen, die Offchain-Synchronisierungsfunktion zu nutzen, um Subgraph-Einsätze bis zum Chainhead zu synchronisieren, bevor die Zuweisung onchain erstellt wird. Diese Funktion ist besonders nützlich für Subgraphen, deren Synchronisierung länger als 28 Epochen dauern kann oder die mit einer gewissen Wahrscheinlichkeit nichtdeterministisch fehlschlagen.
diff --git a/website/src/pages/de/indexing/supported-network-requirements.mdx b/website/src/pages/de/indexing/supported-network-requirements.mdx
index 72e36248f68c..a5f663f3db4a 100644
--- a/website/src/pages/de/indexing/supported-network-requirements.mdx
+++ b/website/src/pages/de/indexing/supported-network-requirements.mdx
@@ -6,7 +6,7 @@ title: Unterstützte Netzwerkanforderungen
| --- | --- | --- | :-: |
| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal)
[Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU
Ubuntu 22.04
16GB+ RAM
>= 8 TiB NVMe SSD
_last updated August 2023_ | ✅ |
| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 5 TiB NVMe SSD
_last updated August 2023_ | ✅ |
-| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)
[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preffered)
_last updated 14th May 2024_ | ✅ |
+| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal)
[GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/geth/baremetal)
[GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/base/geth/docker) | 8+ core CPU
Debian 12/Ubuntu 22.04
16 GB RAM
>= 4.5TB (NVME preferred)
_last updated 14th May 2024_ | ✅ |
| Binance | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/binance/erigon/baremetal) | 8 core / 16 threads CPU
Ubuntu 22.04
>=32 GB RAM
>= 14 TiB NVMe SSD
_last updated 22nd June 2024_ | ✅ |
| Celo | [Docker Guide](https://docs.infradao.com/archive-nodes-101/celo/docker) | 4 core / 8 threads CPU
Ubuntu 22.04
16GB+ RAM
>= 2 TiB NVMe SSD
_last updated August 2023_ | ✅ |
| Ethereum | [Docker Guide](https://docs.infradao.com/archive-nodes-101/ethereum/erigon/docker) | Höhere Taktfrequenz im Vergleich zur Kernanzahl
Ubuntu 22.04
16GB+ RAM
>=3TB (NVMe recommended)
_last updated August 2023_ | ✅ |
diff --git a/website/src/pages/de/indexing/tap.mdx b/website/src/pages/de/indexing/tap.mdx
index 13fa3c754e0d..a3eec839d931 100644
--- a/website/src/pages/de/indexing/tap.mdx
+++ b/website/src/pages/de/indexing/tap.mdx
@@ -1,21 +1,21 @@
---
-title: TAP-Migrationsleitfaden
+title: GraphTally Guide
---
-Erfahren Sie mehr über das neue Zahlungssystem von The Graph, **Timeline Aggregation Protocol, TAP**. Dieses System bietet schnelle, effiziente Mikrotransaktionen mit minimiertem Vertrauen.
+Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust.
## Überblick
-[TAP] (https://docs.rs/tap_core/latest/tap_core/index.html) ist ein direkter Ersatz für das derzeitige Scalar-Zahlungssystem. Es bietet die folgenden Hauptfunktionen:
+GraphTally is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features:
- Effiziente Abwicklung von Mikrozahlungen.
- Fügt den Onchain-Transaktionen und -Kosten eine weitere Ebene der Konsolidierung hinzu.
- Ermöglicht den Indexern die Kontrolle über Eingänge und Zahlungen und garantiert die Bezahlung von Abfragen.
- Es ermöglicht dezentralisierte, vertrauenslose Gateways und verbessert die Leistung des `indexer-service` für mehrere Absender.
-## Besonderheiten
+### Besonderheiten
-TAP ermöglicht es einem Sender, mehrere Zahlungen an einen Empfänger zu leisten, **TAP Receipts**, der diese Zahlungen zu einer einzigen Zahlung zusammenfasst, einem **Receipt Aggregate Voucher**, auch bekannt als **RAV**. Diese aggregierte Zahlung kann dann auf der Blockchain verifiziert werden, wodurch sich die Anzahl der Transaktionen verringert und der Zahlungsvorgang vereinfacht wird.
+GraphTally allows a sender to make multiple payments to a receiver, **Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process.
Für jede Abfrage sendet Ihnen das Gateway eine „signierte Quittung“, die in Ihrer Datenbank gespeichert wird. Dann werden diese Abfragen von einem „Tap-Agent“ durch eine Anfrage aggregiert. Anschließend erhalten Sie ein RAV. Sie können ein RAV aktualisieren, indem Sie es mit neueren Quittungen senden, wodurch ein neues RAV mit einem höheren Wert erzeugt wird.
@@ -59,14 +59,14 @@ Solange Sie `tap-agent` und `indexer-agent` ausführen, wird alles automatisch a
| Unterzeichner | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` |
| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` |
-### Anforderungen
+### Voraussetzungen
-Zusätzlich zu den typischen Anforderungen für den Betrieb eines Indexers benötigen Sie einen `tap-escrow-subgraph`-Endpunkt, um TAP-Aktualisierungen abzufragen. Sie können The Graph Network zur Abfrage verwenden oder sich selbst auf Ihrem `graph-node` hosten.
+In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can use The Graph Network to query or host yourself on your `graph-node`.
-- [Graph TAP Arbitrum Sepolia subgraph (für The Graph Testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD)
-- [Graph TAP Arbitrum One subgraph (für The Graph Mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1)
+- [Graph TAP Arbitrum Sepolia Subgraph (für The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD)
+- [Graph TAP Arbitrum One Subgraph (für The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1)
-> Hinweis: `indexer-agent` übernimmt derzeit nicht die Indizierung dieses Subgraphen, wie es bei der Bereitstellung von Netzwerk-Subgraphen der Fall ist. Daher müssen Sie ihn manuell indizieren.
+> Hinweis: `indexer-agent` übernimmt derzeit nicht die Indizierung dieses Subgraphen, wie es beim Einsatz von Subgraphen im Netzwerk der Fall ist. Infolgedessen müssen Sie ihn manuell indizieren.
## Migrationsleitfaden
@@ -79,7 +79,7 @@ Die erforderliche Softwareversion finden Sie [hier](https://github.com/graphprot
1. **Indexer-Agent**
- Folgen Sie dem [gleichen Prozess](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components).
- - Geben Sie das neue Argument `--tap-subgraph-endpoint` an, um die neuen TAP-Codepfade zu aktivieren und die Einlösung von TAP-RAVs zu ermöglichen.
+ - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs.
2. **Indexer-Service**
@@ -104,8 +104,8 @@ Für eine minimale Konfiguration verwenden Sie die folgende Vorlage:
# Einige der nachstehenden Konfigurationswerte sind globale Graphnetzwerkwerte, die Sie hier finden können:
#
#
-# Pro-Tipp: Wenn Sie einige Werte aus der Umgebung in diese Konfiguration laden müssen,
-# können Sie sie mit Umgebungsvariablen überschreiben. Als Datenbeispiel kann folgendes ersetzt werden
+# Pro-Tipp: Wenn Sie einige Werte aus der Umgebung in diese Konfiguration laden müssen,
+# können Sie sie mit Umgebungsvariablen überschreiben. Zum Beispiel kann das Folgende ersetzt werden
# durch [PREFIX]_DATABASE_POSTGRESURL, wobei PREFIX `INDEXER_SERVICE` oder `TAP_AGENT` sein kann:
#
# [Datenbank]
@@ -116,8 +116,8 @@ indexer_address = „0x1111111111111111111111111111111111111111“
operator_mnemonic = „celery smart tip orange scare van steel radio dragon joy alarm crane“
[database]
-# Die URL der Postgres-Datenbank, die für die Indexer-Komponenten verwendet wird. Die gleiche Datenbank,
-# die auch vom `indexer-agent` verwendet wird. Es wird erwartet, dass `indexer-agent`
+# Die URL der Postgres-Datenbank, die für die Indexer-Komponenten verwendet wird. Die gleiche Datenbank
+# die vom `indexer-agent` verwendet wird. Es wird erwartet, dass `indexer-agent` die
# die notwendigen Tabellen erstellt.
postgres_url = „postgres://postgres@postgres:5432/postgres“
@@ -128,18 +128,18 @@ query_url = „“
status_url = „“
[subgraphs.network]
-# Abfrage-URL für den Graph Network Subgraph.
+# Abfrage-URL für den Graph-Netzwerk-Subgraphen.
query_url = „“
-# Optional, Einsatz, nach dem im lokalen `graph-node` gesucht wird, falls er lokal indiziert ist.
-# Es wird empfohlen, den Subgraphen lokal zu indizieren.
+# Optional, Einsatz, der im lokalen `graph-node` zu suchen ist, falls lokal indiziert.
+# Die lokale Indizierung des Subgraphen wird empfohlen.
# HINWEIS: Verwenden Sie nur `query_url` oder `deployment_id`.
deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
[subgraphs.escrow]
-# Abfrage-URL für den Subgraphen „Escrow“.
+# Abfrage-URL für den Escrow-Subgraphen.
query_url = ""
-# Optional, Einsatz, nach dem im lokalen `graph-node` gesucht wird, falls er lokal indiziert ist.
-# Es wird empfohlen, den Subgraphen lokal zu indizieren.
+# Optional, Einsatz, der im lokalen `graph-node` zu suchen ist, falls lokal indiziert.
+# Die lokale Indizierung des Subgraphen wird empfohlen.
# HINWEIS: Verwenden Sie nur `query_url` oder `deployment_id`.
deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
@@ -153,9 +153,9 @@ receipts_verifier_address = „0x2222222222222222222222222222222222222222“
# Spezifische Konfigurationen für tap-agent #
########################################
[tap]
-# Dies ist die Höhe der Gebühren, die Sie bereit sind, zu einem bestimmten Zeitpunkt zu riskieren. Zum Beispiel,
+# Dies ist die Höhe der Gebühren, die Sie bereit sind, zu einem bestimmten Zeitpunkt zu riskieren. Zum Beispiel,
# wenn der Sender lange genug keine RAVs mehr liefert und die Gebühren diesen Betrag
-# übersteigt, wird der Indexer-Service keine Anfragen mehr vom Absender annehmen
+# übersteigen, wird der Indexer-Service keine Anfragen mehr vom Absender annehmen,
# bis die Gebühren aggregiert sind.
# HINWEIS: Verwenden Sie Strings für dezimale Werte, um Rundungsfehler zu vermeiden.
# z.B.:
@@ -164,7 +164,7 @@ max_Betrag_willig_zu_verlieren_grt = 20
[tap.sender_aggregator_endpoints]
# Key-Value aller Absender und ihrer Aggregator-Endpunkte
-# Das folgende Datenbeispiel gilt für das E&N Testnet-Gateway.
+# Dieses Beispiel gilt für das E&N-Testnetz-Gateway.
0xDDE4cfFd3D9052A9cb618fC05a1Cd02be1f2F467 = "https://tap-aggregator.network.thegraph.com"
```
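Der im Pro-Tipp der Vorlage beschriebene Override-Mechanismus lässt sich so skizzieren (der Variablenname folgt dem dort genannten Muster `[PREFIX]_DATABASE_POSTGRESURL`; der URL-Wert ist nur der Platzhalter aus der Vorlage):

```shell
# Skizze: postgres_url aus der Konfiguration per Umgebungsvariable überschreiben,
# hier für den indexer-service (Präfix INDEXER_SERVICE).
export INDEXER_SERVICE_DATABASE_POSTGRESURL="postgres://postgres@postgres:5432/postgres"
echo "$INDEXER_SERVICE_DATABASE_POSTGRESURL"
```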
diff --git a/website/src/pages/de/indexing/tooling/graph-node.mdx b/website/src/pages/de/indexing/tooling/graph-node.mdx
index ad1242d7c2b7..3c4cb903b165 100644
--- a/website/src/pages/de/indexing/tooling/graph-node.mdx
+++ b/website/src/pages/de/indexing/tooling/graph-node.mdx
@@ -1,40 +1,40 @@
---
-title: Graph Node
+title: Graph Node
---
-Graph Node ist die Komponente, die Subgrafen indiziert und die resultierenden Daten zur Abfrage über eine GraphQL-API verfügbar macht. Als solches ist es für den Indexer-Stack von zentraler Bedeutung, und der korrekte Betrieb des Graph-Knotens ist entscheidend für den Betrieb eines erfolgreichen Indexers.
+Graph Node ist die Komponente, die Subgraphen indiziert und die daraus resultierenden Daten zur Abfrage über eine GraphQL-API bereitstellt. Als solche ist sie ein zentraler Bestandteil des Indexer-Stacks, und der korrekte Betrieb von Graph Node ist entscheidend für den erfolgreichen Betrieb eines Indexers.
-This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node).
+Diese Seite bietet einen kontextbezogenen Überblick über Graph Node und einige der erweiterten Optionen, die Indexern zur Verfügung stehen. Ausführliche Dokumentation und Anleitungen finden Sie im [Graph-Node-Repository](https://github.com/graphprotocol/graph-node).
-## Graph Node
+## Graph Node
-[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query.
+[Graph Node](https://github.com/graphprotocol/graph-node) ist die Referenzimplementierung für die Indizierung von Subgraphen auf The Graph Network, die Verbindung zu Blockchain-Clients, die Indizierung von Subgraphen und die Bereitstellung indizierter Daten für Abfragen.
-Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node).
+Graph Node (und der gesamte Indexer-Stack) kann sowohl auf Bare Metal als auch in einer Cloud-Umgebung betrieben werden. Diese Flexibilität der zentralen Indexer-Komponente ist entscheidend für die Robustheit von The Graph Protocol. Ebenso kann Graph Node [aus dem Quellcode gebaut](https://github.com/graphprotocol/graph-node) werden, oder Indexer können eines der [bereitgestellten Docker Images](https://hub.docker.com/r/graphprotocol/graph-node) verwenden.
### PostgreSQL-Datenbank
-Der Hauptspeicher für den Graph-Knoten, hier werden Subgraf-Daten sowie Metadaten zu Subgrafen und Subgraf-unabhängige Netzwerkdaten wie Block-Cache und eth_call-Cache gespeichert.
+Der Hauptspeicher für den Graph Node. Hier werden die Subgraph-Daten, Metadaten über Subgraphs und Subgraph-agnostische Netzwerkdaten wie der Block-Cache und der eth_call-Cache gespeichert.
### Netzwerk-Clients
-In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client or it could be a more complex setup that load balances across multiple.
+Um ein Netzwerk zu indizieren, benötigt Graph Node Zugriff auf einen Netzwerk-Client über eine EVM-kompatible JSON-RPC-API. Diese RPC-Verbindung kann zu einem einzelnen Client bestehen, oder es kann sich um ein komplexeres Setup handeln, das die Last auf mehrere Clients verteilt.
-While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)).
+Während einige Subgraphen nur einen vollständigen Knoten benötigen, haben andere Indizierungsfunktionen, die zusätzliche RPC-Funktionalität erfordern. Insbesondere Subgraphen, die `eth_calls` als Teil der Indizierung ausführen, benötigen einen Archivknoten, der [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898) unterstützt, und Subgraphen mit `callHandlers` oder `blockHandlers` mit einem `call`-Filter benötigen `trace_filter`-Unterstützung ([siehe Trace-Modul-Dokumentation hier](https://openethereum.github.io/JSONRPC-trace-module)).
-**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/).
+**Network Firehoses** - ein Firehose ist ein gRPC-Dienst, der einen geordneten, aber Fork-bewussten Strom von Blöcken bereitstellt, der von den Kernentwicklern von The Graph entwickelt wurde, um eine performante Indexierung in großem Umfang zu unterstützen. Dies ist derzeit keine Voraussetzung für Indexer, aber Indexer werden ermutigt, sich mit dieser Technologie vertraut zu machen, bevor die volle Netzwerkunterstützung zur Verfügung steht. Erfahren Sie mehr über den Firehose [hier](https://firehose.streamingfast.io/).
### IPFS-Knoten
-Subgraf-Bereitstellungsmetadaten werden im IPFS-Netzwerk gespeichert. Der Graph-Knoten greift hauptsächlich während der Subgraf-Bereitstellung auf den IPFS-Knoten zu, um das Subgraf-Manifest und alle verknüpften Dateien abzurufen. Netzwerk-Indexierer müssen keinen eigenen IPFS-Knoten hosten, ein IPFS-Knoten für das Netzwerk wird unter https://ipfs.network.thegraph.com gehostet.
+Die Metadaten für den Einsatz von Subgraphen werden im IPFS-Netzwerk gespeichert. Der Graph Node greift während des Einsatzes von Subgraphen primär auf den IPFS-Knoten zu, um das Subgraphen-Manifest und alle verknüpften Dateien abzurufen. Netzwerkindizierer müssen keinen eigenen IPFS-Knoten hosten. Ein IPFS-Knoten für das Netzwerk wird auf https://ipfs.network.thegraph.com gehostet.
### Prometheus-Metrikserver
Um Überwachung und Berichterstellung zu ermöglichen, kann Graph Node optional Metriken auf einem Prometheus-Metrikserver protokollieren.
-### Getting started from source
+### Erste Schritte mit dem Quellcode
-#### Install prerequisites
+#### Installieren Sie die Voraussetzungen
- **Rust**
@@ -42,15 +42,15 @@ Um Überwachung und Berichterstellung zu ermöglichen, kann Graph Node optional
- **IPFS**
-- **Additional Requirements for Ubuntu users** - To run a Graph Node on Ubuntu a few additional packages may be needed.
+- **Zusätzliche Anforderungen für Ubuntu-Benutzer** - Um einen Graph Node unter Ubuntu zu betreiben, sind möglicherweise einige zusätzliche Pakete erforderlich.
```sh
-sudo apt-get install -y clang libpq-dev libssl-dev pkg-config
+sudo apt-get install -y clang libpq-dev libssl-dev pkg-config
```
-#### Setup
+#### Konfiguration
-1. Start a PostgreSQL database server
+1. Starten Sie einen PostgreSQL-Datenbankserver
```sh
initdb -D .postgres
@@ -58,9 +58,9 @@ pg_ctl -D .postgres -l logfile start
createdb graph-node
```
-2. Clone [Graph Node](https://github.com/graphprotocol/graph-node) repo and build the source by running `cargo build`
+2. Klonen Sie das [Graph Node](https://github.com/graphprotocol/graph-node)-Repo und bauen Sie den Quellcode mit `cargo build`
-3. Now that all the dependencies are setup, start the Graph Node:
+3. Nachdem alle Abhängigkeiten eingerichtet sind, starten Sie den Graph Node:
```sh
cargo run -p graph-node --release -- \
@@ -71,35 +71,35 @@ cargo run -p graph-node --release -- \
### Erste Schritte mit Kubernetes
-A complete Kubernetes example configuration can be found in the [indexer repository](https://github.com/graphprotocol/indexer/tree/main/k8s).
+Eine vollständige Beispielkonfiguration für Kubernetes ist im [Indexer-Repository](https://github.com/graphprotocol/indexer/tree/main/k8s) zu finden.
### Ports
Wenn es ausgeführt wird, stellt Graph Node die folgenden Ports zur Verfügung:
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| Port | Verwendungszweck | Routen | CLI-Argument | Umgebungsvariable |
| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server
(for subgraph queries) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS
(for subgraph subscriptions) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC
(for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+| 8000 | GraphQL HTTP Server
(für Subgraph-Abfragen) | /subgraphs/id/...
/subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS
(für Subgraphen-Abonnements) | /subgraphs/id/...
/subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC
(zum Verwalten von Deployments) | / | \--admin-port | - |
+| 8030 | Subgraph-Indizierungsstatus-API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus-Metriken | /metrics | \--metrics-port | - |
-> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the the Graph Node JSON-RPC endpoint.
+> **Wichtig**: Seien Sie vorsichtig damit, Ports öffentlich zugänglich zu machen - **Verwaltungsports** sollten unter Verschluss gehalten werden. Dies gilt auch für den JSON-RPC-Endpunkt von Graph Node.
## Erweiterte Graph-Knoten-Konfiguration
-In seiner einfachsten Form kann Graph Node mit einer einzelnen Instanz von Graph Node, einer einzelnen PostgreSQL-Datenbank, einem IPFS-Knoten und den Netzwerk-Clients betrieben werden, die von den zu indizierenden Subgrafen benötigt werden.
+In seiner einfachsten Form kann Graph Node mit einer einzelnen Instanz von Graph Node, einer einzelnen PostgreSQL-Datenbank, einem IPFS-Knoten und den Netzwerk-Clients betrieben werden, die für die zu indizierenden Subgraphen erforderlich sind.
-This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables.
+Dieses Setup kann horizontal skaliert werden, indem mehrere Graph Nodes und mehrere Datenbanken zur Unterstützung dieser Graph Nodes hinzugefügt werden. Fortgeschrittene Benutzer möchten vielleicht einige der horizontalen Skalierungsmöglichkeiten von Graph Node sowie einige der erweiterten Konfigurationsoptionen über die Datei `config.toml` und die Umgebungsvariablen von Graph Node nutzen.
### `config.toml`
-A [TOML](https://toml.io/en/) configuration file can be used to set more complex configurations than those exposed in the CLI. The location of the file is passed with the --config command line switch.
+Eine [TOML](https://toml.io/en/)-Konfigurationsdatei kann verwendet werden, um komplexere Konfigurationen als die in der Befehlszeile angezeigten festzulegen. Der Speicherort der Datei wird mit dem Befehlszeilenschalter --config übergeben.
> Bei Verwendung einer Konfigurationsdatei ist es nicht möglich, die Optionen --postgres-url, --postgres-secondary-hosts und --postgres-host-weights zu verwenden.
-A minimal `config.toml` file can be provided; the following file is equivalent to using the --postgres-url command line option:
+Eine minimale `config.toml`-Datei kann angegeben werden; die folgende Datei entspricht der Verwendung der Befehlszeilenoption --postgres-url:
```toml
[store]
@@ -110,47 +110,47 @@ connection="<.. postgres-url argument ..>"
indexers = [ "<.. list of all indexing nodes ..>" ]
```
-Full documentation of `config.toml` can be found in the [Graph Node docs](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md).
+Eine vollständige Dokumentation von `config.toml` findet sich in den [Graph Node docs](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md).
#### Mehrere Graph-Knoten
-Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules).
+Die Indizierung von Graph Node kann horizontal skaliert werden, indem mehrere Instanzen von Graph Node ausgeführt werden, um die Indizierung und Abfrage auf verschiedene Knoten aufzuteilen. Dies kann einfach durch die Ausführung von Graph Nodes erfolgen, die beim Start mit einer anderen `node_id` konfiguriert werden (z. B. in der Docker Compose-Datei). Diese kann dann in der Datei `config.toml` verwendet werden, um [dedizierte Abfrageknoten](#dedicated-query-nodes), [Block-Ingestoren](#dedicated-block-ingestion) und die Aufteilung von Subgraphen über Knoten mit [Einsatzregeln](#deployment-rules) zu spezifizieren.
> Beachten Sie, dass mehrere Graph-Knoten so konfiguriert werden können, dass sie dieselbe Datenbank verwenden, die ihrerseits durch Sharding horizontal skaliert werden kann.
#### Bereitstellungsregeln
-Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision.
+Bei mehreren Graph-Knoten ist es notwendig, den Einsatz von neuen Subgraphen zu verwalten, damit derselbe Subgraph nicht von zwei verschiedenen Knoten indiziert wird, was zu Kollisionen führen würde. Dies kann durch Einsatzregeln geschehen, die auch angeben können, in welchem `shard` die Daten eines Subgraphen gespeichert werden sollen, wenn Datenbank-Sharding verwendet wird. Einsatzregeln können den Namen des Subgraphen und das Netzwerk, das der Einsatz indiziert, abgleichen, um eine Entscheidung zu treffen.
-Beispielkonfiguration für Bereitstellungsregeln:
+Beispielkonfiguration für Bereitstellungsregeln:
```toml
[deployment]
[[deployment.rule]]
-match = { name = "(vip|important)/.*" }
-shard = "vip"
-indexers = [ "index_node_vip_0", "index_node_vip_1" ]
+match = { name = "(vip|important)/.*" }
+shard = "vip"
+indexers = [ "index_node_vip_0", "index_node_vip_1" ]
[[deployment.rule]]
-match = { network = "kovan" }
-# No shard, so we use the default shard called 'primary'
-indexers = [ "index_node_kovan_0" ]
+match = { network = "kovan" }
+# Kein Shard, also verwenden wir den Standard-Shard namens 'primary'
+indexers = [ "index_node_kovan_0" ]
[[deployment.rule]]
-match = { network = [ "xdai", "poa-core" ] }
-indexers = [ "index_node_other_0" ]
+match = { network = [ "xdai", "poa-core" ] }
+indexers = [ "index_node_other_0" ]
[[deployment.rule]]
-# There's no 'match', so any subgraph matches
-shards = [ "sharda", "shardb" ]
+# Es gibt kein 'match', also passt jeder Subgraph
+shards = [ "sharda", "shardb" ]
indexers = [
- "index_node_community_0",
- "index_node_community_1",
- "index_node_community_2",
- "index_node_community_3",
- "index_node_community_4",
- "index_node_community_5"
+ "index_node_community_0",
+ "index_node_community_1",
+ "index_node_community_2",
+ "index_node_community_3",
+ "index_node_community_4",
+ "index_node_community_5"
]
```
-Read more about deployment rules [here](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#controlling-deployment).
+Lesen Sie mehr über die Einsatzregeln [hier](https://github.com/graphprotocol/graph-node/blob/master/docs/config.md#controlling-deployment).
#### Dedizierte Abfrageknoten
@@ -167,11 +167,11 @@ Jeder Knoten, dessen --node-id mit dem regulären Ausdruck übereinstimmt, wird
Für die meisten Anwendungsfälle reicht eine einzelne Postgres-Datenbank aus, um eine Graph-Node-Instanz zu unterstützen. Wenn eine Graph-Node-Instanz aus einer einzelnen Postgres-Datenbank herauswächst, ist es möglich, die Speicherung der Daten des Graph-Nodes auf mehrere Postgres-Datenbanken aufzuteilen. Alle Datenbanken zusammen bilden den Speicher der Graph-Node-Instanz. Jede einzelne Datenbank wird als Shard bezeichnet.
-Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed.
+Shards können verwendet werden, um Subgraph-Einsätze auf mehrere Datenbanken aufzuteilen, und auch, um die Abfragelast über Replikate auf mehrere Datenbanken zu verteilen. Dazu gehört auch die Konfiguration der Anzahl der verfügbaren Datenbankverbindungen, die jeder `graph-node` in seinem Verbindungspool für jede Datenbank vorhalten soll, was mit zunehmender Zahl indizierter Subgraphen immer wichtiger wird.
Sharding wird nützlich, wenn Ihre vorhandene Datenbank nicht mit der Last Schritt halten kann, die Graph Node ihr auferlegt, und wenn es nicht mehr möglich ist, die Datenbankgröße zu erhöhen.
-> It is generally better make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between subgraphs; in those situations it can help dramatically if the high-volume subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume subgraphs.
+> Im Allgemeinen ist es besser, eine einzelne Datenbank so groß wie möglich zu machen, bevor man mit Shards beginnt. Eine Ausnahme ist, wenn der Abfrageverkehr sehr ungleichmäßig auf die Subgraphen verteilt ist; in solchen Situationen kann es sehr hilfreich sein, wenn die hochvolumigen Subgraphen in einem Shard und alles andere in einem anderen aufbewahrt wird, weil es dann wahrscheinlicher ist, dass die Daten für die hochvolumigen Subgraphen im db-internen Cache verbleiben und nicht durch Daten ersetzt werden, die von den niedrigvolumigen Subgraphen nicht so häufig benötigt werden.
Was das Konfigurieren von Verbindungen betrifft, beginnen Sie mit max_connections in postgresql.conf, das auf 400 (oder vielleicht sogar 200) eingestellt ist, und sehen Sie sich die Prometheus-Metriken store_connection_wait_time_ms und store_connection_checkout_count an. Spürbare Wartezeiten (alles über 5 ms) sind ein Hinweis darauf, dass zu wenige Verbindungen verfügbar sind; hohe Wartezeiten werden auch dadurch verursacht, dass die Datenbank sehr ausgelastet ist (z. B. hohe CPU-Last). Wenn die Datenbank jedoch ansonsten stabil erscheint, weisen hohe Wartezeiten darauf hin, dass die Anzahl der Verbindungen erhöht werden muss. In der Konfiguration ist die Anzahl der Verbindungen, die jede Graph-Knoten-Instanz verwenden kann, eine Obergrenze, und der Graph-Knoten hält Verbindungen nicht offen, wenn er sie nicht benötigt.
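Als Skizze (Schema gemäß den Graph-Node-Dokumenten zu `config.toml`; Hostnamen, Zugangsdaten und `pool_size`-Werte sind hier nur angenommene Platzhalter) könnte eine Shard-Konfiguration mit eigenem Verbindungspool pro Shard so aussehen:

```toml
[store]
[store.primary]
connection = "postgresql://graph:password@primary-db/graph"
pool_size = 200

[store.vip]
# Eigener Shard für hochvolumige Subgraphen, wie oben beschrieben
connection = "postgresql://graph:password@vip-db/graph"
pool_size = 100
```

Welcher Shard ein Deployment aufnimmt, wird dann über die `shard`-Angabe in den Bereitstellungsregeln gesteuert.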
@@ -188,7 +188,7 @@ ingestor = "block_ingestor_node"
#### Unterstützung mehrerer Netzwerke
-Das Graph-Protokoll erhöht die Anzahl der Netzwerke, die für die Indizierung von Belohnungen unterstützt werden, und es gibt viele Subgraphen, die nicht unterstützte Netzwerke indizieren, die ein Indexer verarbeiten möchte. Die Datei `config.toml` ermöglicht eine ausdrucksstarke und flexible Konfiguration von:
+Das Graph Protocol erhöht die Anzahl der Netzwerke, die für die Indizierung von Belohnungen unterstützt werden, und es gibt viele Subgraphen, die nicht unterstützte Netzwerke indizieren, die ein Indexer gerne verarbeiten würde. Die Datei `config.toml` ermöglicht eine ausdrucksstarke und flexible Konfiguration von:
- Mehrere Netzwerke
- Mehrere Anbieter pro Netzwerk (dies kann eine Aufteilung der Last auf Anbieter ermöglichen und kann auch die Konfiguration von vollständigen Knoten sowie Archivknoten ermöglichen, wobei Graph Node günstigere Anbieter bevorzugt, wenn eine bestimmte Arbeitslast dies zulässt).
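Eine mögliche `[chains]`-Konfiguration dafür (Skizze nach dem in den Graph-Node-Dokumenten beschriebenen Schema; Labels und URLs sind hier angenommene Platzhalter):

```toml
[chains]
ingestor = "block_ingestor_node"

[chains.mainnet]
shard = "primary"
provider = [
  # Teurerer Archivknoten mit Trace-Unterstützung, nur wenn nötig verwendet
  { label = "mainnet-archive", url = "http://archive-node:8545", features = ["archive", "traces"] },
  # Günstigerer vollständiger Knoten für einfache Arbeitslasten
  { label = "mainnet-full", url = "http://full-node:8545", features = [] },
]
```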
@@ -223,13 +223,13 @@ Benutzer, die ein skaliertes Indizierungs-Setup mit erweiterter Konfiguration be
- Das Indexer-Repository hat eine [Beispiel-Kubernetes-Referenz](https://github.com/graphprotocol/indexer/tree/main/k8s)
- [Launchpad](https://docs.graphops.xyz/launchpad/intro) ist ein Toolkit für den Betrieb eines Graph Protocol Indexers auf Kubernetes, das von GraphOps gepflegt wird. Es bietet eine Reihe von Helm-Charts und eine CLI zur Verwaltung eines Graph-Node-Deployments.
-### Managing Graph Node
+### Verwaltung von Graph Node
-Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing subgraphs.
+Bei einem laufenden Graph Node (oder Graph Nodes!) besteht die Herausforderung darin, die eingesetzten Subgraphen über diese Nodes hinweg zu verwalten. Graph Node bietet eine Reihe von Tools, die bei der Verwaltung von Subgraphen helfen.
#### Protokollierung
-Die Protokolle von Graph Node können nützliche Informationen für die Debuggen und Optimierung von Graph Node und bestimmten Subgraphen liefern. Graph Node unterstützt verschiedene Log-Ebenen über die Umgebungsvariable `GRAPH_LOG`, mit den folgenden Ebenen: Fehler, Warnung, Info, Debug oder Trace.
+Die Protokolle von Graph Node können nützliche Informationen zur Fehlersuche und Optimierung von Graph Node und bestimmten Subgraphen liefern. Graph Node unterstützt verschiedene Log-Ebenen über die Umgebungsvariable `GRAPH_LOG`, mit den folgenden Ebenen: error, warn, info, debug oder trace.
Wenn Sie außerdem `GRAPH_LOG_QUERY_TIMING` auf `gql` setzen, erhalten Sie mehr Details darüber, wie GraphQL-Abfragen ausgeführt werden (allerdings wird dadurch eine große Menge an Protokollen erzeugt).
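Beide Umgebungsvariablen lassen sich zum Beispiel so setzen (Skizze, etwa vor dem Start von `graph-node`):

```shell
# Log-Ebene auf debug setzen; gültige Ebenen: error, warn, info, debug, trace
export GRAPH_LOG=debug
# Detaillierte Protokollierung der GraphQL-Abfrageausführung aktivieren
# (erzeugt eine große Menge an Logs)
export GRAPH_LOG_QUERY_TIMING=gql
echo "$GRAPH_LOG $GRAPH_LOG_QUERY_TIMING"
```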
@@ -247,86 +247,86 @@ Der Befehl graphman ist in den offiziellen Containern enthalten, und Sie können
Eine vollständige Dokumentation der `graphman`-Befehle ist im Graph Node Repository verfügbar. Siehe [/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) im Graph-Node-Verzeichnis `/docs`.
-### Working with subgraphs
+### Arbeiten mit Subgraphen
#### Indizierungsstatus-API
-Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different subgraphs, checking proofs of indexing, inspecting subgraph features and more.
+Die API für den Indizierungsstatus ist standardmäßig an Port 8030/graphql verfügbar und bietet eine Reihe von Methoden zur Überprüfung des Indizierungsstatus für verschiedene Subgraphen, zur Überprüfung von Indizierungsnachweisen, zur Inspektion von Subgraphen-Features und mehr.
Das vollständige Schema ist [hier](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql) verfügbar.
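Eine solche Statusabfrage lässt sich z. B. mit `curl` skizzieren (Annahmen: ein lokaler `graph-node` läuft mit dem Standard-Port 8030; die Feldauswahl ist ein minimales Beispiel aus dem verlinkten Schema):

```shell
# Fragt den Indizierungsstatus aller eingesetzten Subgraphen ab (minimale Feldauswahl).
PAYLOAD='{"query":"{ indexingStatuses { subgraph synced health } }"}'
curl -s http://localhost:8030/graphql \
  -H 'content-type: application/json' \
  -d "$PAYLOAD"
```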
-#### Indexing performance
+#### Indizierungsleistung
-There are three separate parts of the indexing process:
+Es gibt drei separate Teile des Indizierungsprozesses:
-- Fetching events of interest from the provider
-- Processing events in order with the appropriate handlers (this can involve calling the chain for state, and fetching data from the store)
-- Writing the resulting data to the store
+- Abrufen von interessanten Ereignissen vom Anbieter
+- Verarbeiten von Ereignissen in der Reihenfolge mit den entsprechenden Handlern (dies kann das Aufrufen der Kette für den Zustand und das Abrufen von Daten aus dem Speicher beinhalten)
+- Schreiben der Ergebnisdaten in den Speicher
-These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where subgraphs are slow to index, the underlying cause will depend on the specific subgraph.
+Diese Phasen sind in einer Pipeline angeordnet (d.h. sie können parallel ausgeführt werden), aber sie sind voneinander abhängig. Wenn die Indizierung von Subgraphen langsam ist, hängt die Ursache dafür von dem jeweiligen Subgraphen ab.
-Common causes of indexing slowness:
+Häufige Ursachen für eine langsame Indizierung:
- Zeit, die benötigt wird, um relevante Ereignisse aus der Kette zu finden (insbesondere Call-Handler können langsam sein, da sie auf `trace_filter` angewiesen sind)
- Durchführen einer großen Anzahl von „eth_calls“ als Teil von Handlern
-- A large amount of store interaction during execution
-- A large amount of data to save to the store
-- A large number of events to process
-- Slow database connection time, for crowded nodes
-- The provider itself falling behind the chain head
-- Slowness in fetching new receipts at the chain head from the provider
+- Eine große Anzahl von Store-Interaktionen während der Ausführung
+- Eine große Datenmenge, die im Speicher gespeichert werden soll
+- Eine große Anzahl von Ereignissen, die verarbeitet werden müssen
+- Lange Datenbankverbindungszeit für überfüllte Knoten
+- Der Anbieter selbst fällt hinter den Kettenkopf zurück
+- Langsamkeit beim Abrufen neuer Transaktions-Receipts am Kettenkopf vom Anbieter
-Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance.
+Metriken zur Indizierung von Subgraphen können dabei helfen, die Ursache für die Langsamkeit der Indizierung zu ermitteln. In einigen Fällen liegt das Problem am Subgraph selbst, in anderen Fällen können verbesserte Netzwerkanbieter, geringere Datenbankkonflikte und andere Konfigurationsverbesserungen die Indizierungsleistung deutlich verbessern.
-#### Failed subgraphs
+#### Fehlerhafte Subgraphen
-During indexing subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure:
+Während der Indizierung können Subgraphen fehlschlagen, wenn sie auf unerwartete Daten stoßen, wenn eine Komponente nicht wie erwartet funktioniert oder wenn es einen Fehler in den Event-Handlern oder der Konfiguration gibt. Es gibt zwei allgemeine Arten von Fehlern:
-- Deterministic failures: these are failures which will not be resolved with retries
-- Non-deterministic failures: these might be down to issues with the provider, or some unexpected Graph Node error. When a non-deterministic failure occurs, Graph Node will retry the failing handlers, backing off over time.
+- Deterministische Fehler: Dies sind Fehler, die nicht durch Wiederholungsversuche behoben werden können
+- Nicht deterministische Fehler: Diese können auf Probleme mit dem Anbieter oder auf einen unerwarteten Graph-Knoten-Fehler zurückzuführen sein. Wenn ein nicht deterministischer Fehler auftritt, wiederholt Graph Node die fehlgeschlagenen Handler mit im Laufe der Zeit zunehmenden Wartezeiten (Backoff).
-In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the subgraph code is required.
+In einigen Fällen kann ein Fehler durch den Indexer behoben werden (z. B. wenn der Fehler darauf zurückzuführen ist, dass nicht die richtige Art von Anbieter vorhanden ist, kann durch Hinzufügen des erforderlichen Anbieters die Indizierung fortgesetzt werden). In anderen Fällen ist jedoch eine Änderung des Subgraph-Codes erforderlich.
-> Deterministische Fehler werden als „endgültig“ betrachtet, wobei für den fehlgeschlagenen Block ein Indizierungsnachweis generiert wird, während nicht-deterministische Fehler nicht als solche betrachtet werden, da es dem Subgraph gelingen kann, „auszufallen“ und die Indizierung fortzusetzen. In einigen Fällen ist das nicht-deterministische Label falsch und der Subgraph wird den Fehler nie überwinden; solche Fehler sollten als Probleme im Graph Node Repository gemeldet werden.
+> Deterministische Fehler werden als „endgültig“ betrachtet, wobei für den fehlgeschlagenen Block ein Indizierungsnachweis generiert wird, während nicht-deterministische Fehler nicht als solche betrachtet werden, da es dem Subgraphen gelingen kann, „nicht zu versagen“ und die Indizierung fortzusetzen. In einigen Fällen ist die nicht-deterministische Kennzeichnung falsch, und der Subgraph wird den Fehler nie überwinden; solche Fehler sollten als Probleme im Graph Node Repository gemeldet werden.
-#### Block and call cache
+#### Block- und Call-Cache
-Graph Node speichert bestimmte Daten im Zwischenspeicher, um ein erneutes Abrufen vom Anbieter zu vermeiden. Blöcke werden zwischengespeichert, ebenso wie die Ergebnisse von `eth_calls` (letztere werden ab einem bestimmten Block zwischengespeichert). Diese Zwischenspeicherung kann die Indizierungsgeschwindigkeit bei der „Neusynchronisierung“ eines geringfügig veränderten Subgraphen drastisch erhöhen.
+Graph Node speichert bestimmte Daten im Zwischenspeicher, um ein erneutes Abrufen vom Anbieter zu vermeiden. Blöcke werden zwischengespeichert, ebenso wie die Ergebnisse von `eth_calls` (letztere werden ab einem bestimmten Block zwischengespeichert). Diese Zwischenspeicherung kann die Indizierungsgeschwindigkeit bei der „Neusynchronisierung“ eines leicht geänderten Subgraphen drastisch erhöhen.
-Wenn jedoch ein Ethereum-Knoten über einen bestimmten Zeitraum falsche Daten geliefert hat, können diese in den Cache gelangen und zu falschen Daten oder fehlgeschlagenen Subgraphen führen. In diesem Fall können Indexer `graphman` verwenden, um den vergifteten Cache zu löschen und dann die betroffenen Subgraphen zurückzuspulen, die dann frische Daten von dem (hoffentlich) gesunden Anbieter abrufen.
+Wenn jedoch ein Ethereum-Knoten über einen bestimmten Zeitraum falsche Daten geliefert hat, können diese in den Cache gelangen und zu falschen Daten oder fehlgeschlagenen Subgraphen führen. In diesem Fall können Indexierer `graphman` verwenden, um den vergifteten Cache zu löschen und dann die betroffenen Subgraphen zurückzuspulen, die dann frische Daten von dem (hoffentlich) gesunden Anbieter abrufen.
-If a block cache inconsistency is suspected, such as a tx receipt missing event:
+Wenn eine Inkonsistenz im Block-Cache vermutet wird, z. B. wenn in einer Transaktionsquittung ein Ereignis fehlt:
1. `graphman chain list`, um den Namen der Kette zu finden.
2. `graphman chain check-blocks by-number ` prüft, ob der zwischengespeicherte Block mit dem Anbieter übereinstimmt, und löscht den Block aus dem Cache, wenn dies nicht der Fall ist.
1. Wenn es einen Unterschied gibt, kann es sicherer sein, den gesamten Cache mit `graphman chain truncate ` abzuschneiden.
- 2. If the block matches the provider, then the issue can be debugged directly against the provider.
+ 2. Wenn der Block mit dem Anbieter übereinstimmt, kann das Problem direkt beim Anbieter gedebuggt werden.
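
Die obigen Schritte lassen sich als kurze Shell-Sitzung skizzieren. Der Kettenname `mainnet`, die Blocknummer und das `--config`-Argument sind hier nur Annahmen; die genaue Aufrufsyntax ist der `graphman`-Dokumentation zu entnehmen:

```bash
# Kettennamen ermitteln
graphman --config config.toml chain list

# Zwischengespeicherten Block gegen den Anbieter prüfen;
# weicht er ab, wird er aus dem Cache entfernt
graphman --config config.toml chain check-blocks mainnet by-number 17500000

# Bei größeren Abweichungen den gesamten Block-Cache der Kette leeren
graphman --config config.toml chain truncate mainnet
```

Das Leeren des gesamten Caches ist die sicherere, aber teurere Variante, da die Blöcke danach neu vom Anbieter geholt werden müssen.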
-#### Querying issues and errors
+#### Abfragen von Problemen und Fehlern
-Once a subgraph has been indexed, indexers can expect to serve queries via the subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process.
+Sobald ein Subgraph indiziert wurde, können Indexierer erwarten, Abfragen über den dedizierten Abfrageendpunkt des Subgraphen zu bedienen. Wenn der Indexierer ein erhebliches Abfragevolumen bedienen möchte, wird ein dedizierter Abfrageknoten empfohlen. Bei sehr hohem Abfragevolumen können Indexierer zudem Replikat-Shards konfigurieren, damit Abfragen den Indexierungsprozess nicht beeinträchtigen.
-However, even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users.
+Aber selbst mit einem dedizierten Abfrageknoten und Replikaten kann die Ausführung bestimmter Abfragen lange dauern und in einigen Fällen die Speichernutzung erhöhen und die Abfragezeit für andere Benutzer negativ beeinflussen.
-There is not one "silver bullet", but a range of tools for preventing, diagnosing and dealing with slow queries.
+Es gibt nicht die eine Wunderwaffe, sondern eine Reihe von Tools zur Vorbeugung, Diagnose und Behandlung langsamer Abfragen.
-##### Query caching
+##### Abfrage-Caching
Graph Node zwischenspeichert GraphQL-Abfragen standardmäßig, was die Datenbanklast erheblich reduzieren kann. Dies kann mit den Einstellungen `GRAPH_QUERY_CACHE_BLOCKS` und `GRAPH_QUERY_CACHE_MAX_MEM` weiter konfiguriert werden - lesen Sie mehr [hier](https://github.com/graphprotocol/graph-node/blob/master/docs/environment-variables.md#graphql-caching).
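
Ein mögliches Datenbeispiel für diese Konfiguration als Umgebungsvariablen; die konkreten Werte sind reine Annahmen und müssen an die eigene Hardware und Last angepasst werden:

```bash
# Abfrageergebnisse werden für die jüngsten N Blöcke zwischengespeichert
export GRAPH_QUERY_CACHE_BLOCKS=6

# Obergrenze für den Gesamtspeicher des Abfrage-Caches (laut Doku in MB)
export GRAPH_QUERY_CACHE_MAX_MEM=3000
```

Ein größerer Cache reduziert die Datenbanklast, bindet aber entsprechend mehr Arbeitsspeicher auf dem Abfrageknoten.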
-##### Analysing queries
+##### Analysieren von Abfragen
-Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that subgraph or query. And then of course to resolve it, if possible.
+Problematische Abfragen treten meist auf zwei Arten zutage. In einigen Fällen melden die Benutzer selbst, dass eine bestimmte Abfrage langsam ist. In diesem Fall besteht die Herausforderung darin, den Grund für die Langsamkeit zu diagnostizieren: Handelt es sich um ein allgemeines Problem oder ist es spezifisch für diesen Subgraphen oder diese Abfrage? Und dann natürlich, das Problem wenn möglich zu beheben.
-In other cases, the trigger might be high memory usage on a query node, in which case the challenge is first to identify the query causing the issue.
+In anderen Fällen kann der Auslöser eine hohe Speicherauslastung auf einem Abfrageknoten sein. In diesem Fall besteht die Herausforderung darin, zuerst die Abfrage zu identifizieren, die das Problem verursacht.
Indexer können [qlog](https://github.com/graphprotocol/qlog/) verwenden, um die Abfrageprotokolle von Graph Node zu verarbeiten und zusammenzufassen. `GRAPH_LOG_QUERY_TIMING` kann auch aktiviert werden, um langsame Abfragen zu identifizieren und zu debuggen.
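
Ein mögliches Vorgehen in der Shell; der genaue `qlog`-Aufruf ist hier nur als Annahme skizziert und sollte gegen das qlog-README geprüft werden:

```bash
# Zeitmessung für GraphQL-Abfragen im Graph-Node-Log aktivieren
export GRAPH_LOG_QUERY_TIMING=gql

# Abfrageprotokolle anschließend mit qlog verarbeiten und
# zusammenfassen (hypothetischer Aufruf)
qlog process < query-logs.jsonl
```

So lassen sich die langsamsten Abfragen nach Häufigkeit und Laufzeit gruppieren, bevor man am Kostenmodell oder an den Tabellen ansetzt.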
-Given a slow query, indexers have a few options. Of course they can alter their cost model, to significantly increase the cost of sending the problematic query. This may result in a reduction in the frequency of that query. However this often doesn't resolve the root cause of the issue.
+Bei einer langsamen Abfrage haben Indexierer einige Optionen. Natürlich können sie ihr Kostenmodell ändern, um die Kosten für das Senden der problematischen Abfrage erheblich zu erhöhen. Dies kann zu einer Verringerung der Häufigkeit dieser Abfrage führen. Häufig behebt das jedoch nicht die Ursache des Problems.
-##### Account-like optimisation
+##### Kontoähnliche Optimierung
-Database tables that store entities seem to generally come in two varieties: 'transaction-like', where entities, once created, are never updated, i.e., they store something akin to a list of financial transactions, and 'account-like' where entities are updated very often, i.e., they store something like financial accounts that get modified every time a transaction is recorded. Account-like tables are characterized by the fact that they contain a large number of entity versions, but relatively few distinct entities. Often, in such tables the number of distinct entities is 1% of the total number of rows (entity versions)
+Datenbanktabellen, die Entitäten speichern, scheinen im Allgemeinen in zwei Varianten zu existieren: „transaktionsähnlich“, bei denen Entitäten, sobald sie erstellt wurden, nie aktualisiert werden, d. h. sie speichern so etwas wie eine Liste von Finanztransaktionen, und „kontoähnlich“, bei denen Entitäten sehr oft aktualisiert werden, d. h. sie speichern so etwas wie Finanzkonten, die jedes Mal geändert werden, wenn eine Transaktion aufgezeichnet wird. Kontenähnliche Tabellen zeichnen sich dadurch aus, dass sie eine große Anzahl von Entitätsversionen, aber relativ wenige eindeutige Entitäten enthalten. In solchen Tabellen beträgt die Anzahl der unterschiedlichen Entitäten häufig 1 % der Gesamtzahl der Zeilen (Entitätsversionen).
Für kontoähnliche Tabellen kann `graph-node` Abfragen generieren, die sich die Details zunutze machen, wie Postgres Daten mit einer so hohen Änderungsrate speichert, nämlich dass alle Versionen für die jüngsten Blöcke in einem kleinen Teil des Gesamtspeichers für eine solche Tabelle liegen.
@@ -336,10 +336,10 @@ Im Allgemeinen sind Tabellen, bei denen die Anzahl der unterschiedlichen Entitä
Sobald eine Tabelle als „kontoähnlich“ eingestuft wurde, wird durch die Ausführung von `graphman stats account-like .` die kontoähnliche Optimierung für Abfragen auf diese Tabelle aktiviert. Die Optimierung kann mit `graphman stats account-like --clear .` wieder ausgeschaltet werden. Es dauert bis zu 5 Minuten, bis die Abfrageknoten merken, dass die Optimierung ein- oder ausgeschaltet wurde. Nach dem Einschalten der Optimierung muss überprüft werden, ob die Abfragen für diese Tabelle durch die Änderung nicht tatsächlich langsamer werden. Wenn Sie Grafana für die Überwachung von Postgres konfiguriert haben, würden langsame Abfragen in `pg_stat_activity` in großer Zahl angezeigt werden und mehrere Sekunden dauern. In diesem Fall muss die Optimierung wieder abgeschaltet werden.
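
Ein Datenbeispiel für das Ein- und Ausschalten der Optimierung; der Datenbank-Namensraum `sgd21902` und die Tabelle `pair` sind hypothetische Platzhalter:

```bash
# Kontoähnliche Optimierung für eine Tabelle einschalten
graphman --config config.toml stats account-like sgd21902.pair

# ... und wieder ausschalten, falls Abfragen dadurch langsamer werden
graphman --config config.toml stats account-like --clear sgd21902.pair
```

Nach dem Umschalten sollte man einige Minuten warten und dann in `pg_stat_activity` prüfen, ob sich die Laufzeiten der betroffenen Abfragen tatsächlich verbessert haben.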
-Bei Uniswap-ähnlichen Subgraphen sind die `pair`- und `token`-Tabellen die Hauptkandidaten für diese Optimierung und können die Datenbankauslastung erheblich beeinflussen.
+For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.
-#### Removing subgraphs
+#### Entfernen von Subgraphen
> This is new functionality, which will be available in Graph Node 0.29.x
-Irgendwann möchte ein Indexer vielleicht einen bestimmten Subgraph entfernen. Das kann einfach mit `graphman drop` gemacht werden, das einen Einsatz und alle indizierten Daten löscht. Der Einsatz kann entweder als Subgraph-Name, als IPFS-Hash `Qm..` oder als Datenbank-Namensraum `sgdNNN` angegeben werden. Weitere Dokumentation ist [hier](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop) verfügbar.
+Irgendwann möchte ein Indexierer vielleicht einen bestimmten Subgraphen entfernen. Dies kann einfach mit `graphman drop` gemacht werden, welches ein Deployment und alle indizierten Daten löscht. Das Deployment kann entweder als Name eines Subgraphen, als IPFS-Hash `Qm..` oder als Datenbank-Namensraum `sgdNNN` angegeben werden. Weitere Dokumentation ist [hier](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop) verfügbar.
diff --git a/website/src/pages/de/resources/_meta-titles.json b/website/src/pages/de/resources/_meta-titles.json
index f5971e95a8f6..5ef7fded48f6 100644
--- a/website/src/pages/de/resources/_meta-titles.json
+++ b/website/src/pages/de/resources/_meta-titles.json
@@ -1,4 +1,4 @@
{
- "roles": "Additional Roles",
- "migration-guides": "Migration Guides"
+ "roles": "Zusätzliche Rollen",
+ "migration-guides": "Leitfäden zur Migration"
}
diff --git a/website/src/pages/de/resources/benefits.mdx b/website/src/pages/de/resources/benefits.mdx
index 24c816c0784e..414897ac5365 100644
--- a/website/src/pages/de/resources/benefits.mdx
+++ b/website/src/pages/de/resources/benefits.mdx
@@ -34,7 +34,7 @@ Die Abfragekosten können variieren; die angegebenen Kosten sind der Durchschnit
| Entwicklungszeit† | $400 pro Monat | Keine, eingebaut in das Netzwerk mit global verteilten Indexern |
| Abfragen pro Monat | Begrenzt auf infrastrukturelle Funktionen | 100.000 (kostenloser Plan) |
| Kosten pro Abfrage | $0 | $0‡ |
-| Infrastructure | Zentralisiert | Dezentralisiert |
+| Infrastruktur | Zentralisiert | Dezentralisiert |
| Geografische Redundanz | $750+ pro zusätzlichem Knoten | Eingeschlossen |
| Betriebszeit | Variiert | 99.9%+ |
| Monatliche Gesamtkosten | $750+ | $0 |
@@ -48,7 +48,7 @@ Die Abfragekosten können variieren; die angegebenen Kosten sind der Durchschnit
| Entwicklungszeit† | $800 pro Monat | Keine, eingebaut in das Netzwerk mit global verteilten Indexern |
| Abfragen pro Monat | Begrenzt auf infrastrukturelle Funktionen | ~3,000,000 |
| Kosten pro Abfrage | $0 | $0.00004 |
-| Infrastructure | Zentralisiert | Dezentralisiert |
+| Infrastruktur | Zentralisiert | Dezentralisiert |
| Engineering-Kosten | $200 pro Stunde | Eingeschlossen |
| Geografische Redundanz | $1,200 Gesamtkosten pro zusätzlichem Knoten | Eingeschlossen |
| Betriebszeit | Variiert | 99.9%+ |
@@ -64,7 +64,7 @@ Die Abfragekosten können variieren; die angegebenen Kosten sind der Durchschnit
| Entwicklungszeit† | $6,000 oder mehr pro Monat | Keine, eingebaut in das Netzwerk mit global verteilten Indexern |
| Abfragen pro Monat | Begrenzt auf infrastrukturelle Funktionen | ~30,000,000 |
| Kosten pro Abfrage | $0 | $0.00004 |
-| Infrastructure | Zentralisiert | Dezentralisiert |
+| Infrastruktur | Zentralisiert | Dezentralisiert |
| Geografische Redundanz | $1,200 Gesamtkosten pro zusätzlichem Knoten | Eingeschlossen |
| Betriebszeit | Variiert | 99.9%+ |
| Monatliche Gesamtkosten | $11,000+ | $1,200 |
@@ -90,4 +90,4 @@ Das dezentralisierte Netzwerk von The Graph bietet den Nutzern Zugang zu einer g
Unterm Strich: Das The Graph Network ist kostengünstiger, einfacher zu benutzen und liefert bessere Ergebnisse als ein lokaler `graph-node`.
-Beginnen Sie noch heute mit der Nutzung von The Graph Network und erfahren Sie, wie Sie [Ihren Subgraphут im dezentralen Netzwerk von The Graph veröffentlichen](/subgraphs/quick-start/).
+Beginnen Sie noch heute mit der Nutzung von The Graph Network und erfahren Sie, wie Sie [Ihren Subgraphen im dezentralen Netzwerk von The Graph veröffentlichen](/subgraphs/quick-start/).
diff --git a/website/src/pages/de/resources/glossary.mdx b/website/src/pages/de/resources/glossary.mdx
index ffcd4bca2eed..921c1f6225ae 100644
--- a/website/src/pages/de/resources/glossary.mdx
+++ b/website/src/pages/de/resources/glossary.mdx
@@ -1,83 +1,83 @@
---
-title: Glossary
+title: Glossar
---
-- **The Graph**: A decentralized protocol for indexing and querying data.
+- **The Graph**: Ein dezentrales Protokoll zur Indizierung und Abfrage von Daten.
-- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer.
+- **Abfrage**: Eine Anfrage nach Daten. Im Fall von The Graph ist eine Abfrage eine Anfrage nach Daten aus einem Subgraphen, die von einem Indexierer beantwortet wird.
-- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+- **GraphQL**: Eine Abfragesprache für APIs und eine Laufzeitumgebung, um diese Abfragen mit Ihren vorhandenen Daten zu erfüllen. The Graph verwendet GraphQL, um Subgraphen abzufragen.
-- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network.
+- **Endpunkt**: Eine URL, die zur Abfrage eines Subgraphen verwendet werden kann. Der Test-Endpunkt für Subgraph Studio ist `https://api.studio.thegraph.com/query///` und der Graph Explorer Endpunkt ist `https://gateway.thegraph.com/api//subgraphs/id/`. Der The Graph Explorer Endpunkt wird verwendet, um Subgraphen im dezentralen Netzwerk von The Graph abzufragen.
-- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone.
+- **Subgraph**: Eine offene API, die Daten aus einer Blockchain extrahiert, verarbeitet und so speichert, dass sie einfach über GraphQL abgefragt werden können. Entwickler können einen Subgraphen erstellen, bereitstellen und auf The Graph Network veröffentlichen. Sobald der Subgraph indiziert ist, kann er von jedem abgefragt werden.
-- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries.
+- **Indexierer**: Netzwerkteilnehmer, die Indexierungsknoten betreiben, um Daten aus Blockchains zu indexieren und GraphQL-Abfragen zu bedienen.
-- **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards.
+- **Einkommensströme für Indexierer**: Indexierer werden in GRT mit zwei Komponenten belohnt: Rabatte auf Abfragegebühren und Rewards für die Indizierung.
- 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network.
+ 1. **Abfragegebühren-Rabatte**: Zahlungen von Subgraph-Konsumenten für die Bedienung von Anfragen im Netz.
- 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually.
+ 2. **Indizierungs-Rewards**: Die Rewards, die Indexierer für die Indizierung von Subgraphen erhalten. Indizierungs-Rewards werden durch eine jährliche Neuausgabe von 3 % GRT generiert.
-- **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit.
+- **Selbstbeteiligung der Indexierer**: Der Betrag an GRT, den Indexierer einsetzen, um am dezentralen Netzwerk teilzunehmen. Das Minimum beträgt 100.000 GRT, eine Obergrenze gibt es nicht.
-- **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake.
+- **Delegationskapazität**: Die maximale Menge an GRT, die ein Indexierer von Delegatoren annehmen kann. Indexierer können nur bis zum 16-fachen ihrer Selbstbeteiligung akzeptieren, und zusätzliche Delegationen führen zu verwässerten Rewards. Beispiel: Wenn ein Indexierer eine Selbstbeteiligung von 1 Mio. GRT hat, beträgt seine Delegationskapazität 16 Mio. GRT. Indexierer können ihre Delegationskapazität jedoch erhöhen, indem sie ihre Selbstbeteiligung erhöhen.
-- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers.
+- **Upgrade-Indexierer**: Ein Indexierer, der als Fallback für Subgraph-Abfragen dient, die nicht von anderen Indexierern im Netzwerk bedient werden. Der Upgrade-Indexierer steht nicht im Wettbewerb mit anderen Indexierern.
-- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs.
+- **Delegator**: Netzwerkteilnehmer, die GRT besitzen und ihre GRT an Indexierer delegieren. Dies erlaubt es Indexierern, ihren Einsatz in Subgraphen im Netzwerk zu erhöhen. Im Gegenzug erhalten die Delegatoren einen Teil der Indizierungs-Rewards, die Indexierer für die Verarbeitung von Subgraphen erhalten.
-- **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned.
+- **Delegationssteuer**: Eine 0,5%ige Gebühr, die von Delegatoren gezahlt wird, wenn sie GRT an Indexierer delegieren. Die GRT, die zur Zahlung der Gebühr verwendet werden, werden verbrannt.
-- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph.
+- **Kurator**: Netzwerkteilnehmer, die hochwertige Subgraphen identifizieren und im Gegenzug für Kurationsanteile GRT auf ihnen signalisieren. Wenn Indexierer Abfragegebühren für einen Subgraphen beanspruchen, werden 10 % an die Kuratoren dieses Subgraphen verteilt. Es gibt eine positive Korrelation zwischen der Menge der signalisierten GRT und der Anzahl der Indexierer, die einen Subgraphen indizieren.
-- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned.
+- **Kuratierungssteuer**: Eine 1%ige Gebühr, die von Kuratoren gezahlt wird, wenn sie GRT auf Subgraphen signalisieren. Die GRT, die zur Zahlung der Gebühr verwendet werden, werden verbrannt.
-- **Data Consumer**: Any application or user that queries a subgraph.
+- **Datenverbraucher**: Jede Anwendung oder jeder Benutzer, die bzw. der einen Subgraphen abfragt.
-- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network.
+- **Subgraph Developer**: Ein Entwickler, der einen Subgraphen für das dezentrale Netzwerk von The Graph erstellt und bereitstellt.
-- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example.
+- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example.
-- **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day.
+- **Epoche**: Eine Zeiteinheit innerhalb des Netzes. Derzeit entspricht eine Epoche 6.646 Blöcken oder etwa 1 Tag.
-- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses:
+- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses:
- 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated.
+ 1. **Aktiv**: Eine Zuordnung gilt als aktiv, wenn sie onchain erstellt wird. Dies wird als Öffnen einer Zuordnung bezeichnet und zeigt dem Netzwerk an, dass der Indexierer aktiv indiziert und Abfragen für einen bestimmten Subgraphen bedient. Aktive Zuweisungen sammeln Rewards für die Indizierung, die proportional zum Signal auf dem Subgraphen und der Menge des zugewiesenen GRT sind.
- 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data.
+ 2. **Geschlossen**: Ein Indexierer kann die aufgelaufenen Rewards für einen bestimmten Subgraphen beanspruchen, indem er einen aktuellen und gültigen Proof of Indexing (POI) einreicht. Dies wird als Schließen einer Zuordnung bezeichnet. Eine Zuordnung muss mindestens eine Epoche lang offen gewesen sein, bevor sie geschlossen werden kann. Die maximale Zuordnungsdauer beträgt 28 Epochen. Lässt ein Indexierer eine Zuordnung länger als 28 Epochen offen, wird sie als veraltete Zuordnung bezeichnet. Wenn sich eine Zuordnung im Zustand **Geschlossen** befindet, kann ein Fischer immer noch einen Disput eröffnen, um einen Indexierer wegen der Bereitstellung falscher Daten anzufechten.
-- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs.
+- **Subgraph Studio**: Eine leistungsstarke Dapp zum Erstellen, Bereitstellen und Veröffentlichen von Subgraphen.
-- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide.
+- **Fischer**: Eine Rolle innerhalb des The Graph Network, die von Teilnehmern eingenommen wird, die die Genauigkeit und Integrität der von Indexierern gelieferten Daten überwachen. Wenn ein Fischer eine Abfrage-Antwort oder einen POI identifiziert, den er für falsch hält, kann er einen Disput gegen den Indexierer einleiten. Wird der Disput zu Gunsten des Fischers entschieden, verliert der Indexierer 2,5 % seiner Selbstbeteiligung. Von diesem Betrag erhält der Fischer 50 % als Belohnung für seine Wachsamkeit, und die restlichen 50 % werden aus dem Verkehr gezogen (verbrannt). Dieser Mechanismus soll die Fischer dazu ermutigen, die Zuverlässigkeit des Netzwerks aufrechtzuerhalten, indem sichergestellt wird, dass die Indexierer für die von ihnen gelieferten Daten verantwortlich gemacht werden.
-- **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network.
+- **Schlichter**: Schlichter sind Netzwerkteilnehmer, die im Rahmen eines Governance-Prozesses ernannt werden. Die Rolle des Schlichters besteht darin, über den Ausgang von Streitigkeiten bei Indizierungen und Abfragen zu entscheiden. Ihr Ziel ist es, den Nutzen und die Zuverlässigkeit von The Graph Network zu maximieren.
-- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned.
+- **Slashing**: Indexierern kann ihr selbst eingesetztes GRT gekürzt (slashed) werden, wenn sie einen falschen POI oder ungenaue Daten liefern. Der Prozentsatz des Slashings ist ein Protokollparameter, der derzeit auf 2,5 % der Selbstbeteiligung eines Indexierers festgelegt ist. 50 % der gekürzten GRT gehen an den Fischer, der die ungenauen Daten oder den falschen POI angefochten hat. Die anderen 50 % werden verbrannt.
-- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT.
+- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT.
-- **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT.
+- **Delegation Rewards**: Die Rewards, die Delegatoren für die Delegierung von GRT an Indexierer erhalten. Delegations-Rewards werden in GRT verteilt.
-- **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network.
+- **GRT**: Der Utility-Token von The Graph. GRT bietet den Netzwerkteilnehmern wirtschaftliche Anreize für ihren Beitrag zum Netzwerk.
-- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer.
+- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer.
-- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer.
+- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer.
-- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations.
+- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations.
-- **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way.
+- **The Graph Client**: Eine Bibliothek für den Aufbau von GraphQL-basierten Dapps auf dezentralisierte Weise.
-- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol.
+- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol.
-- **Graph CLI**: A command line interface tool for building and deploying to The Graph.
+- **Graph CLI**: Ein Command-Line-Interface-Tool (CLI) zum Erstellen und Bereitstellen auf The Graph.
-- **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again.
+- **Abkühlphase**: Die Zeit, die verbleibt, bis ein Indexierer, der seine Delegationsparameter geändert hat, dies wieder tun kann.
-- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake.
+- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake.
-- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings.
+- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings.
-- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2).
+- **Migrieren**: Der Prozess, bei dem Kurationsanteile von einer alten Version eines Subgraphen auf eine neue Version eines Subgraphen übertragen werden (z. B. wenn v0.0.1 auf v0.0.2 aktualisiert wird).
diff --git a/website/src/pages/de/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/de/resources/migration-guides/assemblyscript-migration-guide.mdx
index d5ffa00d0e1f..0508b5db3baf 100644
--- a/website/src/pages/de/resources/migration-guides/assemblyscript-migration-guide.mdx
+++ b/website/src/pages/de/resources/migration-guides/assemblyscript-migration-guide.mdx
@@ -1,18 +1,18 @@
---
-title: AssemblyScript Migration Guide
+title: AssemblyScript-Migrationsleitfaden
---
Bis jetzt haben Subgraphen eine der [ersten Versionen von AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6) verwendet. Endlich haben wir Unterstützung für die [neueste verfügbare Version](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10) hinzugefügt! 🎉
-That will enable subgraph developers to use newer features of the AS language and standard library.
+Dies ermöglicht es den Entwicklern von Subgraphen, neuere Funktionen der AS-Sprache und der Standardbibliothek zu nutzen.
Diese Anleitung gilt für alle, die `graph-cli`/`graph-ts` unter Version `0.22.0` verwenden. Wenn Sie bereits eine höhere (oder gleiche) Version als diese haben, haben Sie bereits Version `0.19.10` von AssemblyScript verwendet 🙂
> Anmerkung: Ab `0.24.0` kann `graph-node` beide Versionen unterstützen, abhängig von der im Subgraph-Manifest angegebenen `apiVersion`.
-## Features
+## Features
-### New functionality
+### Neue Funktionalität
- `TypedArray` kann nun aus `ArrayBuffer` mit Hilfe der [neuen statischen Methode `wrap`](https://www.assemblyscript.org/stdlib/typedarray.html#static-members) ([v0.8.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.8.1)) erstellt werden
- Neue Standard-Bibliotheksfunktionen: `String#toUpperCase`, `String#toLowerCase`, `String#localeCompare`und `TypedArray#set` ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0))
@@ -30,39 +30,39 @@ Diese Anleitung gilt für alle, die `graph-cli`/`graph-ts` unter Version `0.22.0
- Hinzufügen von `toUTCString` für `Date` ([v0.18.30](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.30))
- Hinzufügen von `nonnull/NonNullable` integrierten Typ ([v0.19.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.19.2))
-### Optimizations
+### Optimierungen
- `Math`-Funktionen wie `exp`, `exp2`, `log`, `log2` und `pow` wurden durch schnellere Varianten ersetzt ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0))
- Leicht optimierte `Math.mod` ([v0.17.1](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.1))
- Mehr Feldzugriffe in std Map und Set zwischengespeichert ([v0.17.8](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.17.8))
- Optimieren für Zweierpotenzen in `ipow32/64` ([v0.18.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.18.2))
-### Other
+### Sonstiges
- Der Typ eines Array-Literal kann nun aus seinem Inhalt abgeleitet werden ([v0.9.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.0))
- stdlib auf Unicode 13.0.0 ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0)) aktualisiert
-## How to upgrade?
+## Wie kann man upgraden?
-1. Ändern Sie Ihre Mappings `apiVersion` in `subgraph.yaml` auf `0.0.6`:
+1. Ändern Sie die `apiVersion` Ihrer Mappings in der `subgraph.yaml` auf `0.0.9`:
```yaml
...
dataSources:
...
mapping:
...
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
...
```
2. Aktualisieren Sie die `graph-cli`, die Sie verwenden, auf die `latest` Version, indem Sie sie ausführen:
```bash
-# if you have it globally installed
+# wenn es global installiert ist
npm install --global @graphprotocol/graph-cli@latest
-# or in your subgraph if you have it as a dev dependency
+# oder in Ihrem Subgrafen, wenn Sie es als Entwicklerabhängigkeit haben
npm install --save-dev @graphprotocol/graph-cli@latest
```
@@ -72,14 +72,14 @@ npm install --save-dev @graphprotocol/graph-cli@latest
npm install --save @graphprotocol/graph-ts@latest
```
-4. Follow the rest of the guide to fix the language breaking changes.
+4. Befolgen Sie den Rest der Anleitung, um die nicht abwärtskompatiblen Sprachänderungen zu beheben.
5. Führen Sie `codegen` und `deploy` erneut aus.
-## Breaking changes
+## Nicht abwärtskompatible Änderungen (Breaking Changes)
-### Nullability
+### Nullbarkeit
-On the older version of AssemblyScript, you could create code like this:
+In der älteren Version von AssemblyScript konnten Sie Code wie diesen erstellen:
```typescript
function load(): Value | null { ... }
@@ -88,7 +88,7 @@ let maybeValue = load();
maybeValue.aMethod();
```
-However on the newer version, because the value is nullable, it requires you to check, like this:
+Da der Wert in der neueren Version jedoch nullbar ist, müssen Sie dies wie folgt überprüfen:
```typescript
let maybeValue = load()
@@ -98,17 +98,17 @@ if (maybeValue) {
}
```
-Or force it like this:
+Oder erzwingen Sie es wie folgt:
```typescript
-let maybeValue = load()! // breaks in runtime if value is null
+let maybeValue = load()! // bricht zur Laufzeit ab, wenn der Wert null ist
maybeValue.aMethod()
```
-If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist you might want to just do an early if statement with a return in you subgraph handler.
+Wenn Sie unsicher sind, welche Variante Sie wählen sollen, empfehlen wir, immer die sichere zu verwenden. Wenn der Wert nicht vorhanden ist, können Sie in Ihrem Subgraf-Handler einfach früh mit einer if-Anweisung und einem return abbrechen.
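Eine minimale, lauffähige Skizze dieses Musters in einfachem TypeScript (`Value`, `load` und `aMethod` sind hypothetische Platzhalter aus dem Beispiel oben):

```typescript
class Value {
  aMethod(): string {
    return "ok"
  }
}

// Platzhalter für die Ladefunktion aus dem Beispiel oben
function load(id: string): Value | null {
  return id === "bekannt" ? new Value() : null
}

function handler(id: string): string {
  const maybeValue = load(id)
  if (maybeValue === null) {
    // früher Return: der restliche Handler sieht nie einen Nullwert
    return "übersprungen"
  }
  return maybeValue.aMethod()
}
```

So bleibt der gesamte restliche Handler frei von Nullprüfungen, ohne dass ein unsicheres `!` nötig ist.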
-### Variable Shadowing
+### Variablen-Shadowing
Früher konnte man [variable shadowing](https://en.wikipedia.org/wiki/Variable_shadowing) machen und Code wie dieser würde funktionieren:
@@ -118,7 +118,7 @@ let b = 20
let a = a + b
```
-However now this isn't possible anymore, and the compiler returns this error:
+Jetzt ist dies jedoch nicht mehr möglich und der Compiler gibt diesen Fehler zurück:
```typescript
ERROR TS2451: Cannot redeclare block-scoped variable 'a'
@@ -128,11 +128,11 @@ ERROR TS2451: Cannot redeclare block-scoped variable 'a'
in assembly/index.ts(4,3)
```
-You'll need to rename your duplicate variables if you had variable shadowing.
+Sie müssen Ihre doppelten Variablen umbenennen, wenn Sie Variablen-Shadowing verwendet haben.
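Zur Veranschaulichung eine minimale Skizze (Variablennamen frei gewählt), die das Shadowing durch Umbenennen auflöst:

```typescript
let a = 10
let b = 20
// statt `let a = a + b` (Shadowing, Compilerfehler TS2451)
// eine neue, eindeutig benannte Variable verwenden:
let summe = a + b
```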
-### Null Comparisons
+### Null-Vergleiche
-By doing the upgrade on your subgraph, sometimes you might get errors like these:
+Wenn Sie das Upgrade für Ihren Subgrafen durchführen, können manchmal Fehler wie diese auftreten:
```typescript
ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'.
@@ -151,7 +151,7 @@ Zur Lösung des Problems können Sie die `if`-Anweisung einfach wie folgt änder
if (decimals === null) {
```
-The same applies if you're doing != instead of ==.
+Dasselbe gilt, wenn Sie != statt == verwenden.
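Das Muster lässt sich in einfachem TypeScript skizzieren (`formatDecimals` ist ein frei erfundener Name, kein Teil der Anleitung):

```typescript
function formatDecimals(decimals: number | null): string {
  // strikter Vergleich mit `===`/`!==` statt `==`/`!=`,
  // damit der nullbare Typ korrekt eingegrenzt wird
  if (decimals === null) {
    return "unbekannt"
  }
  return decimals.toString()
}
```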
### Casting
@@ -162,15 +162,15 @@ let byteArray = new ByteArray(10)
let uint8Array = byteArray as Uint8Array // equivalent to: byteArray
```
-However this only works in two scenarios:
+Dies funktioniert jedoch nur in zwei Szenarien:
- Primitives Casting (zwischen Typen wie `u8`, `i32`, `bool`; z. B.: `let b: isize = 10; b as usize`);
-- Upcasting on class inheritance (subclass → superclass)
+- Upcasting bei der Klassenvererbung (Unterklasse → Oberklasse)
Beispiele:
```typescript
-// primitive casting
+// primitives Casting
let a: usize = 10
let b: isize = 5
let c: usize = a + (b as usize)
@@ -186,8 +186,8 @@ let bytes = new Bytes(2)
Es gibt zwei Szenarien, in denen man casten möchte, aber die Verwendung von `as`/`var` **ist nicht sicher**:
-- Downcasting on class inheritance (superclass → subclass)
-- Between two types that share a superclass
+- Downcasting bei der Klassenvererbung (Oberklasse → Unterklasse)
+- Zwischen zwei Typen, die eine gemeinsame Oberklasse haben
```typescript
// Downcasting bei Klassenvererbung
@@ -228,11 +228,11 @@ changetype(bytes) // funktioniert :)
Wenn Sie nur die Nullbarkeit entfernen wollen, können Sie weiterhin den `as`-Operator (oder `variable`) verwenden, aber stellen Sie sicher, dass Sie wissen, dass der Wert nicht Null sein kann, sonst bricht es.
```typescript
-// remove nullability
+// die NULL-Zulässigkeit entfernen
let previousBalance = AccountBalance.load(balanceId) // AccountBalance | null
if (previousBalance != null) {
- return previousBalance as AccountBalance // safe remove null
+ return previousBalance as AccountBalance // die NULL-Zulässigkeit sicher entfernen
}
let newBalance = new AccountBalance(balanceId)
@@ -240,14 +240,14 @@ let newBalance = new AccountBalance(balanceId)
Für den Fall der Nullbarkeit empfehlen wir, einen Blick auf die [Nullability-Check-Funktion](https://www.assemblyscript.org/basics.html#nullability-checks) zu werfen; sie macht Ihren Code sauberer 🙂
-Also we've added a few more static methods in some types to ease casting, they are:
+Außerdem haben wir ein paar weitere statische Methoden in einigen Typen hinzugefügt, um das Casting zu erleichtern:
- Bytes.fromByteArray
- Bytes.fromUint8Array
- BigInt.fromByteArray
- ByteArray.fromBigInt
-### Nullability check with property access
+### Nullbarkeitsprüfung mit Eigenschaftszugriff
Um die [Nullability-Check-Funktion](https://www.assemblyscript.org/basics.html#nullability-checks) zu verwenden, können Sie entweder `if`-Anweisungen oder den ternären Operator (`?` und `:`) wie folgt verwenden:
@@ -277,10 +277,10 @@ class Container {
let container = new Container()
container.data = 'data'
-let somethingOrElse: string = container.data ? container.data : 'else' // doesn't compile
+let somethingOrElse: string = container.data ? container.data : 'else' // lässt sich nicht kompilieren
```
-Which outputs this error:
+Das gibt folgenden Fehler aus:
```typescript
ERROR TS2322: Type '~lib/string/String | null' is not assignable to type '~lib/string/String'.
@@ -301,12 +301,12 @@ container.data = 'data'
let data = container.data
-let somethingOrElse: string = data ? data : 'else' // compiles just fine :)
+let somethingOrElse: string = data ? data : 'else' // lässt sich prima kompilieren :)
```
-### Operator overloading with property access
+### Operatorüberladung mit Eigenschaftszugriff
-If you try to sum (for example) a nullable type (from a property access) with a non nullable one, the AssemblyScript compiler instead of giving a compile time error warning that one of the values is nullable, it just compiles silently, giving chance for the code to break at runtime.
+Wenn Sie versuchen, z. B. einen nullbaren Typ (aus einem Eigenschaftszugriff) mit einem nicht nullbaren Typ zu addieren, warnt der AssemblyScript-Compiler nicht zur Kompilierzeit, dass einer der Werte null sein könnte. Stattdessen kompiliert er stillschweigend, sodass der Code zur Laufzeit fehlschlagen kann.
```typescript
class BigInt extends Uint8Array {
@@ -323,14 +323,14 @@ class Wrapper {
let x = BigInt.fromI32(2)
let y: BigInt | null = null
-x + y // give compile time error about nullability
+x + y // gibt Kompilierzeitfehler über die Nullbarkeit
let wrapper = new Wrapper(y)
-wrapper.n = wrapper.n + x // doesn't give compile time errors as it should
+wrapper.n = wrapper.n + x // gibt keine Kompilierzeitfehler, wie es sollte
```
-We've opened a issue on the AssemblyScript compiler for this, but for now if you do these kind of operations in your subgraph mappings, you should change them to do a null check before it.
+Wir haben dazu ein Issue im AssemblyScript-Compiler-Repository eröffnet. Wenn Sie solche Operationen in Ihren Subgraf-Mappings ausführen, sollten Sie sie so ändern, dass vorher eine Nullprüfung durchgeführt wird.
```typescript
let wrapper = new Wrapper(y)
@@ -339,12 +339,12 @@ if (!wrapper.n) {
wrapper.n = BigInt.fromI32(0)
}
-wrapper.n = wrapper.n + x // now `n` is guaranteed to be a BigInt
+wrapper.n = wrapper.n + x // jetzt ist `n` garantiert ein BigInt
```
-### Value initialization
+### Wert-Initialisierung
-If you have any code like this:
+Wenn Sie einen Code wie diesen haben:
```typescript
var value: Type // null
@@ -352,7 +352,7 @@ value.x = 10
value.y = 'content'
```
-It will compile but break at runtime, that happens because the value hasn't been initialized, so make sure your subgraph has initialized their values, like this:
+Es wird zwar kompiliert, bricht aber zur Laufzeit ab. Dies liegt daran, dass der Wert nicht initialisiert wurde. Stellen Sie daher sicher, dass Ihr Subgraf seine Werte initialisiert hat, etwa so:
```typescript
var value = new Type() // initialized
@@ -360,7 +360,7 @@ value.x = 10
value.y = 'content'
```
-Also if you have nullable properties in a GraphQL entity, like this:
+Wenn Sie außerdem nullbare Eigenschaften in einer GraphQL-Entität haben, wie hier:
```graphql
type Total @entity {
@@ -369,7 +369,7 @@ type Total @entity {
}
```
-And you have code similar to this:
+Und Sie haben einen ähnlichen Code wie diesen:
```typescript
let total = Total.load('latest')
@@ -407,15 +407,15 @@ type Total @entity {
let total = Total.load('latest')
if (total === null) {
- total = new Total('latest') // already initializes non-nullable properties
+ total = new Total('latest') // initialisiert bereits Eigenschaften, die keine NULL-Werte zulassen
}
total.amount = total.amount + BigInt.fromI32(1)
```
-### Class property initialization
+### Initialisierung von Klasseneigenschaften
-If you export any classes with properties that are other classes (declared by you or by the standard library) like this:
+Wenn Sie Klassen mit Eigenschaften exportieren, die selbst Klassen sind (von Ihnen oder von der Standardbibliothek deklariert), etwa so:
```typescript
class Thing {}
@@ -432,7 +432,7 @@ export class Something {
constructor(public value: Thing) {}
}
-// oder
+// or
export class Something {
value: Thing
@@ -442,7 +442,7 @@ export class Something {
}
}
-// oder
+// or
export class Something {
value!: Thing
@@ -459,7 +459,7 @@ let arr = new Array(5) // ["", "", "", "", ""]
arr.push('something') // ["", "", "", "", "", "something"] // size 6 :(
```
-Depending on the types you're using, eg nullable ones, and how you're accessing them, you might encounter a runtime error like this one:
+Je nach den Typen, die Sie verwenden (z. B. nullbare Typen) und wie Sie darauf zugreifen, kann es zu einem Laufzeitfehler wie diesem kommen:
```
ERRO Handler skipped due to execution failure, error: Mapping aborted at ~lib/array.ts, line 110, column 40, with message: Element type must be nullable if array is holey wasm backtrace: 0: 0x19c4 - !~lib/@graphprotocol/graph-ts/index/format 1: 0x1e75 - !~lib/@graphprotocol/graph-ts/common/collections/Entity#constructor 2: 0x30b9 - !node_modules/@graphprotocol/graph-ts/global/global/id_of_type
@@ -473,7 +473,7 @@ let arr = new Array(0) // []
arr.push('something') // ["something"]
```
-Or you should mutate it via index:
+Oder Sie sollten es per Index mutieren:
```typescript
let arr = new Array(5) // ["", "", "", "", ""]
@@ -481,11 +481,11 @@ let arr = new Array(5) // ["", "", "", "", ""]
arr[0] = 'something' // ["something", "", "", "", ""]
```
-### GraphQL schema
+### GraphQL-Schema
Dies ist keine direkte AssemblyScript-Änderung, aber Sie müssen möglicherweise Ihre Datei `schema.graphql` aktualisieren.
-Now you no longer can define fields in your types that are Non-Nullable Lists. If you have a schema like this:
+Jetzt können Sie in Ihren Typen keine Felder mehr definieren, die nicht nullbare Listen sind. Wenn Sie über ein Schema wie dieses verfügen:
```graphql
type Something @entity {
@@ -513,7 +513,7 @@ type MyEntity @entity {
Dies hat sich aufgrund von Unterschieden in der Nullbarkeit zwischen AssemblyScript-Versionen geändert und hängt mit der Datei `src/generated/schema.ts` (Standardpfad, vielleicht haben Sie diesen geändert) zusammen.
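Eine minimale Skizze (Typ- und Feldnamen sind hypothetisch), wie die Nullbarkeit nun am Listenelement selbst deklariert werden muss:

```graphql
type Something @entity {
  # vorher: listOfThings: [Thing]! – die nicht nullbare Liste
  # mit nullbaren Elementen ist so nicht mehr zulässig
  listOfThings: [Thing!]!
}
```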
-### Other
+### Sonstiges
- `Map#set` und `Set#add` wurden an die Spezifikation angepasst und geben `this` zurück ([v0.9.2](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.9.2))
- Arrays erben nicht mehr von ArrayBufferView, sondern sind jetzt eigenständig ([v0.10.0](https://github.com/AssemblyScript/assemblyscript/releases/tag/v0.10.0))
diff --git a/website/src/pages/de/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/de/resources/migration-guides/graphql-validations-migration-guide.mdx
index 68c70b711a60..a0b114383280 100644
--- a/website/src/pages/de/resources/migration-guides/graphql-validations-migration-guide.mdx
+++ b/website/src/pages/de/resources/migration-guides/graphql-validations-migration-guide.mdx
@@ -1,62 +1,62 @@
---
-title: GraphQL Validations Migration Guide
+title: Anleitung zur Migration von GraphQL-Validierungen
---
-Soon `graph-node` will support 100% coverage of the [GraphQL Validations specification](https://spec.graphql.org/June2018/#sec-Validation).
+Bald wird „graph-node“ eine 100-prozentige Abdeckung der [GraphQL Validations-Spezifikation](https://spec.graphql.org/June2018/#sec-Validation) unterstützen.
-Previous versions of `graph-node` did not support all validations and provided more graceful responses - so, in cases of ambiguity, `graph-node` was ignoring invalid GraphQL operations components.
+Frühere Versionen von „graph-node“ unterstützten nicht alle Validierungen und reagierten nachsichtiger: Bei Unklarheiten ignorierte „graph-node“ ungültige Komponenten von GraphQL-Operationen.
-GraphQL Validations support is the pillar for the upcoming new features and the performance at scale of The Graph Network.
+Die Unterstützung von GraphQL-Validierungen ist die Grundlage für die kommenden neuen Funktionen und die umfassende Leistung von The Graph Network.
-It will also ensure determinism of query responses, a key requirement on The Graph Network.
+Dadurch wird auch der Determinismus der Abfrageantworten sichergestellt, eine wichtige Anforderung für The Graph Network.
-**Enabling the GraphQL Validations will break some existing queries** sent to The Graph API.
+**Die Aktivierung der GraphQL-Validierungen führt dazu, dass einige bestehende, an die The Graph API gesendete Abfragen nicht mehr funktionieren.**
-To be compliant with those validations, please follow the migration guide.
+Um diese Validierungen einzuhalten, befolgen Sie bitte den Migrationsleitfaden.
-> ⚠️ If you do not migrate your queries before the validations are rolled out, they will return errors and possibly break your frontends/clients.
+> ⚠️ Wenn Sie Ihre Abfragen nicht migrieren, bevor die Validierungen eingeführt werden, geben sie Fehler zurück und Ihre Frontends/Clients funktionieren möglicherweise nicht mehr.
-## Migration guide
+## Migrationsleitfaden
-You can use the CLI migration tool to find any issues in your GraphQL operations and fix them. Alternatively you can update the endpoint of your GraphQL client to use the `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` endpoint. Testing your queries against this endpoint will help you find the issues in your queries.
+Mit dem CLI-Migrationstool können Sie Probleme in Ihren GraphQL-Vorgängen finden und beheben. Alternativ können Sie den Endpunkt Ihres GraphQL-Clients aktualisieren, um den Endpunkt „https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME“ zu verwenden. Wenn Sie Ihre Abfragen anhand dieses Endpunkts testen, können Sie die Probleme in Ihren Abfragen leichter finden.
-> Not all subgraphs will need to be migrated, if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.
+> Nicht alle Subgrafen müssen migriert werden: Wenn Sie [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) oder [GraphQL Code Generator](https://the-guild.dev/graphql/codegen) verwenden, stellen diese bereits sicher, dass Ihre Abfragen gültig sind.
-## Migration CLI tool
+## Migrations-CLI-Tool
-**Most of the GraphQL operations errors can be found in your codebase ahead of time.**
+**Die meisten GraphQL-Operationsfehler können im Voraus in Ihrer Codebasis gefunden werden.**
-For this reason, we provide a smooth experience for validating your GraphQL operations during development or in CI.
+Aus diesem Grund bieten wir eine reibungslose Validierung Ihrer GraphQL-Operationen während der Entwicklung oder im CI.
-[`@graphql-validate/cli`](https://github.com/saihaj/graphql-validate) is a simple CLI tool that helps validate GraphQL operations against a given schema.
+[`@graphql-validate/cli`](https://github.com/saihaj/graphql-validate) ist ein einfaches CLI-Tool, das bei der Validierung von GraphQL-Operationen anhand eines bestimmten Schemas hilft.
-### **Getting started**
+### **Erste Schritte**
-You can run the tool as follows:
+Sie können das Tool wie folgt ausführen:
```bash
npx @graphql-validate/cli -s https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME -o *.graphql
```
-**Notes:**
+**Anmerkungen:**
-- Set or replace $GITHUB_USER, $SUBGRAPH_NAME with the appropriate values. Like: [`artblocks/art-blocks`](https://api.thegraph.com/subgraphs/name/artblocks/art-blocks)
-- The preview schema URL (https://api-next.thegraph.com/) provided is heavily rate-limited and will be sunset once all users have migrated to the new version. **Do not use it in production.**
-- Operations are identified in files with the following extensions [`.graphql`,](https://www.graphql-tools.com/docs/schema-loading#graphql-file-loader)[`.ts`, `.tsx`, `.js`, `jsx`](https://www.graphql-tools.com/docs/schema-loading#code-file-loader) (`-o` option).
+- Setzen oder ersetzen Sie $GITHUB_USER, $SUBGRAPH_NAME durch die entsprechenden Werte. Wie z.B.: [`artblocks/art-blocks`](https://api.thegraph.com/subgraphs/name/artblocks/art-blocks)
+- Die bereitgestellte Vorschau-Schema-URL (https://api-next.thegraph.com/) ist stark ratenbeschränkt und wird eingestellt, sobald alle Benutzer auf die neue Version migriert sind. **Verwenden Sie sie nicht in der Produktion.**
+- Operationen werden in Dateien mit den folgenden Erweiterungen identifiziert: [`.graphql`](https://www.graphql-tools.com/docs/schema-loading#graphql-file-loader), [`.ts`, `.tsx`, `.js`, `jsx`](https://www.graphql-tools.com/docs/schema-loading#code-file-loader) (Option `-o`).
-### CLI output
+### CLI-Ausgabe
-The `[@graphql-validate/cli](https://github.com/saihaj/graphql-validate)` CLI tool will output any GraphQL operations errors as follows:
+Das CLI-Tool „[@graphql-validate/cli](https://github.com/saihaj/graphql-validate)“ gibt alle GraphQL-Operationsfehler wie folgt aus:

-For each error, you will find a description, file path and position, and a link to a solution example (see the following section).
+Zu jedem Fehler finden Sie eine Beschreibung, Dateipfad und -position sowie einen Link zu einem Lösungsbeispiel (siehe folgenden Abschnitt).
-## Run your local queries against the preview schema
+## Führen Sie Ihre lokalen Abfragen anhand des Vorschauschemas aus
-We provide an endpoint `https://api-next.thegraph.com/` that runs a `graph-node` version that has validations turned on.
+Wir stellen einen Endpunkt „https://api-next.thegraph.com/“ bereit, der eine „graph-node“-Version ausführt, bei der Validierungen aktiviert sind.
-You can try out queries by sending them to:
+Sie können Abfragen ausprobieren, indem Sie diese an folgende Adresse senden:
- `https://api-next.thegraph.com/subgraphs/id/`
@@ -64,28 +64,28 @@ oder
- `https://api-next.thegraph.com/subgraphs/name//