diff --git a/website/src/pages/ar/about.mdx b/website/src/pages/ar/about.mdx
index 8005f34aef5f..93dbeb51f658 100644
--- a/website/src/pages/ar/about.mdx
+++ b/website/src/pages/ar/about.mdx
@@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block
## The Graph Provides a Solution
-The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API.
+The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API.
Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process.
### How The Graph Functions
-Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL.
+Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL.
#### Specifics
-- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph.
+- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph.
-- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database.
+- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database.
-- When creating a subgraph, you need to write a subgraph manifest.
+- When creating a Subgraph, you need to write a Subgraph manifest.
-- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph.
+- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph.
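+
+As a rough sketch, a minimal Subgraph manifest (`subgraph.yaml`) could look like the following; the contract address, entity, event, and handler names here are purely illustrative, not taken from any real Subgraph:
+
+```yaml
+specVersion: 0.0.4
+schema:
+  file: ./schema.graphql
+dataSources:
+  - kind: ethereum/contract
+    name: ExampleContract
+    network: mainnet
+    source:
+      address: "0x0000000000000000000000000000000000000000"
+      abi: ExampleContract
+    mapping:
+      kind: ethereum/events
+      apiVersion: 0.0.6
+      language: wasm/assemblyscript
+      entities:
+        - Transfer
+      abis:
+        - name: ExampleContract
+          file: ./abis/ExampleContract.json
+      eventHandlers:
+        - event: Transfer(indexed address,indexed address,uint256)
+          handler: handleTransfer
+      file: ./src/mapping.ts
+```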
-The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions.
+The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions.

@@ -56,12 +56,12 @@ The diagram below provides more detailed information about the flow of data afte
1. A dapp adds data to Ethereum through a transaction on a smart contract.
-2. العقد الذكي يصدر حدثا واحدا أو أكثر أثناء معالجة الإجراء.
+2. The smart contract emits one or more events while processing the action.
-3. يقوم الـ Graph Node بمسح الـ Ethereum باستمرار بحثا عن الكتل الجديدة وبيانات الـ subgraph الخاص بك.
-4. يعثر الـ Graph Node على أحداث الـ Ethereum لـ subgraph الخاص بك في هذه الكتل ويقوم بتشغيل mapping handlers التي قدمتها. الـ mapping عبارة عن وحدة WASM والتي تقوم بإنشاء أو تحديث البيانات التي يخزنها Graph Node استجابة لأحداث الـ Ethereum.
+3. Graph Node continually scans Ethereum for new blocks and the data they may contain for your Subgraph.
+4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events.
5. The dapp queries the Graph Node for data indexed from the blockchain, using the node's [GraphQL endpoint](https://graphql.org/learn/). The Graph Node in turn translates the GraphQL queries into queries for its underlying data store in order to fetch this data, making use of the store's indexing capabilities. The dapp displays this data in a rich UI for end-users, which they use to issue new transactions on Ethereum. The cycle repeats.
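+
+For example, a dapp might send a query like the following to the Graph Node's GraphQL endpoint; the `transfers` entity and its fields are illustrative, not part of any specific Subgraph:
+
+```graphql
+{
+  transfers(first: 5, orderBy: timestamp, orderDirection: desc) {
+    id
+    from
+    to
+    amount
+  }
+}
+```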
-## الخطوات التالية
+## Next Steps
-The following sections provide a more in-depth look at subgraphs, their deployment and data querying.
+The following sections provide a more in-depth look at Subgraphs, their deployment and data querying.
-Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data.
+Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data.
diff --git a/website/src/pages/ar/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/ar/archived/arbitrum/arbitrum-faq.mdx
index 898175b05cad..e1dbbea03383 100644
--- a/website/src/pages/ar/archived/arbitrum/arbitrum-faq.mdx
+++ b/website/src/pages/ar/archived/arbitrum/arbitrum-faq.mdx
@@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from:
- Security inherited from Ethereum
-Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas.
+Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas.
The Graph community decided to move forward with Arbitrum last year after the outcome of the [GIP-0031](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) discussion.
@@ -39,7 +39,7 @@ To take advantage of using The Graph on L2, use this dropdown switcher to toggle

-## As a subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now?
+## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now?
Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support.
@@ -51,9 +51,9 @@ All smart contracts have been thoroughly [audited](https://github.com/graphproto
Everything has been tested thoroughly, and a contingency plan is in place to ensure a safe and seamless transition. Details can be found [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20).
-## Are existing subgraphs on Ethereum working?
+## Are existing Subgraphs on Ethereum working?
-All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly.
+All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly.
## Does GRT have a new smart contract deployed on Arbitrum?
diff --git a/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-faq.mdx
index 9c949027b41f..965c96f7355a 100644
--- a/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-faq.mdx
+++ b/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-faq.mdx
@@ -24,9 +24,9 @@ The exception is with smart contract wallets like multisigs: these are smart con
The L2 Transfer Tools use Arbitrum’s native mechanism to send messages from L1 to L2. This mechanism is called a “retryable ticket” and is used by all native token bridges, including the Arbitrum GRT bridge. You can read more about retryable tickets in the [Arbitrum docs](https://docs.arbitrum.io/arbos/l1-to-l2-messaging).
-When you transfer your assets (subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum).
+When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum).
-This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there help you.
+This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you.
### I started my delegation/stake/curation transfer and I'm not sure if it made it through to L2, how can I confirm that it was transferred correctly?
@@ -36,43 +36,43 @@ If you have the L1 transaction hash (which you can find by looking at the recent
-## نقل الـ Subgraph (الرسم البياني الفرعي)
+## Subgraph Transfer
-### كيفكيف أقوم بتحويل الـ subgraph الخاص بي؟
+### How do I transfer my Subgraph?
-لنقل الـ subgraph الخاص بك ، ستحتاج إلى إكمال الخطوات التالية:
+To transfer your Subgraph, you will need to complete the following steps:
-1. ابدأ التحويل على شبكة Ethereum mainnet
+1. Initiate the transfer on Ethereum mainnet
-2. انتظر 20 دقيقة للتأكيد
+2. Wait 20 minutes for confirmation
-3. قم بتأكيد نقل الـ subgraph على Arbitrum \ \*
+3. Confirm Subgraph transfer on Arbitrum\*
-4. قم بإنهاء نشر الـ subgraph على Arbitrum
+4. Finish publishing Subgraph on Arbitrum
-5. جدث عنوان URL للاستعلام (مستحسن)
+5. Update query URL (recommended)
-\*Note that you must confirm the transfer within 7 days otherwise your subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
+\*Note that you must confirm the transfer within 7 days, otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
-### من أين يجب أن أبدأ التحويل ؟
+### Where should I initiate my transfer from?
-يمكنك بدء عملية النقل من [Subgraph Studio] (https://thegraph.com/studio/) ، [Explorer ،] (https://thegraph.com/explorer) أو من أي صفحة تفاصيل subgraph. انقر فوق الزر "Transfer Subgraph" في صفحة تفاصيل الرسم الـ subgraph لبدء النقل.
+You can initiate your transfer from [Subgraph Studio](https://thegraph.com/studio/), [Explorer](https://thegraph.com/explorer), or any Subgraph details page. Click the "Transfer Subgraph" button on the Subgraph details page to start the transfer.
-### كم من الوقت سأنتظر حتى يتم نقل الـ subgraph الخاص بي
+### How long do I need to wait until my Subgraph is transferred?
-يستغرق وقت النقل حوالي 20 دقيقة. يعمل جسر Arbitrum في الخلفية لإكمال نقل الجسر تلقائيًا. في بعض الحالات ، قد ترتفع تكاليف الغاز وستحتاج إلى تأكيد المعاملة مرة أخرى.
+The transfer takes about 20 minutes. The Arbitrum bridge works in the background to complete the bridge transfer automatically. In some cases, gas costs may spike and you will need to confirm the transaction again.
-### هل سيظل الـ subgraph قابلاً للاكتشاف بعد أن أنقله إلى L2؟
+### Will my Subgraph still be discoverable after I transfer it to L2?
-سيكون الـ subgraph الخاص بك قابلاً للاكتشاف على الشبكة التي تم نشرها عليها فقط. على سبيل المثال ، إذا كان الـ subgraph الخاص بك موجودًا على Arbitrum One ، فيمكنك العثور عليه فقط في Explorer على Arbitrum One ولن تتمكن من العثور عليه على Ethereum. يرجى التأكد من تحديد Arbitrum One في مبدل الشبكة في أعلى الصفحة للتأكد من أنك على الشبكة الصحيحة. بعد النقل ، سيظهر الـ L1 subgraph على أنه مهمل.
+Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network. After the transfer, the L1 Subgraph will appear as deprecated.
-### هل يلزم نشر الـ subgraph الخاص بي لنقله؟
+### Does my Subgraph need to be published to transfer it?
-للاستفادة من أداة نقل الـ subgraph ، يجب أن يكون الرسم البياني الفرعي الخاص بك قد تم نشره بالفعل على شبكة Ethereum الرئيسية ويجب أن يكون لديه إشارة تنسيق مملوكة للمحفظة التي تمتلك الرسم البياني الفرعي. إذا لم يتم نشر الرسم البياني الفرعي الخاص بك ، فمن المستحسن أن تقوم ببساطة بالنشر مباشرة على Arbitrum One - ستكون رسوم الغاز أقل بكثير. إذا كنت تريد نقل رسم بياني فرعي منشور ولكن حساب المالك لا يملك إشارة تنسيق عليه ، فيمكنك الإشارة بمبلغ صغير (على سبيل المثال 1 GRT) من ذلك الحساب ؛ تأكد من اختيار إشارة "auto-migrating".
+To take advantage of the Subgraph transfer tool, your Subgraph must already be published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal.
-### ماذا يحدث لإصدار Ethereum mainnet للرسم البياني الفرعي الخاص بي بعد أن النقل إلى Arbitrum؟
+### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum?
-بعد نقل الرسم البياني الفرعي الخاص بك إلى Arbitrum ، سيتم إهمال إصدار Ethereum mainnet. نوصي بتحديث عنوان URL للاستعلام في غضون 48 ساعة. ومع ذلك ، هناك فترة سماح تحافظ على عمل عنوان URL للشبكة الرئيسية الخاصة بك بحيث يمكن تحديث أي دعم dapp لجهة خارجية.
+After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated.
-### بعد النقل ، هل أحتاج أيضًا إلى إعادة النشر على Arbitrum؟
+### After transferring, do I also need to re-publish on Arbitrum?
@@ -80,21 +80,21 @@ If you have the L1 transaction hash (which you can find by looking at the recent
### Will my endpoint experience downtime while re-publishing?
-It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the subgraph on L1 and whether they keep indexing it until the subgraph is fully supported on L2.
+It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2.
-### هل يتم نشر وتخطيط الإصدار بنفس الطريقة في الـ L2 كما هو الحال في شبكة Ethereum Ethereum mainnet؟
+### Is publishing and versioning the same on L2 as on Ethereum mainnet?
-Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the subgraph.
+Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph.
-### هل سينتقل تنسيق الـ subgraph مع الـ subgraph ؟
+### Will my Subgraph's curation move with my Subgraph?
-إذا اخترت إشارة الترحيل التلقائي auto-migrating ، فسيتم نقل 100٪ من التنسيق مع الرسم البياني الفرعي الخاص بك إلى Arbitrum One. سيتم تحويل كل إشارة التنسيق الخاصة بالرسم الفرعي إلى GRT في وقت النقل ، وسيتم استخدام GRT المقابل لإشارة التنسيق الخاصة بك لصك الإشارة على L2 subgraph.
+If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph.
-يمكن للمنسقين الآخرين اختيار ما إذا كانوا سيسحبون أجزاء من GRT ، أو ينقلونه أيضًا إلى L2 لإنتاج إشارة على نفس الرسم البياني الفرعي.
+Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph.
-### هل يمكنني إعادة الرسم البياني الفرعي الخاص بي إلى Ethereum mainnet بعد أن أقوم بالنقل؟
+### Can I move my Subgraph back to Ethereum mainnet after I transfer?
-بمجرد النقل ، سيتم إهمال إصدار شبكة Ethereum mainnet للرسم البياني الفرعي الخاص بك. إذا كنت ترغب في العودة إلى mainnet ، فستحتاج إلى إعادة النشر (redeploy) والنشر مرة أخرى على mainnet. ومع ذلك ، لا يُنصح بشدة بالتحويل مرة أخرى إلى شبكة Ethereum mainnet حيث سيتم في النهاية توزيع مكافآت الفهرسة بالكامل على Arbitrum One.
+Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One.
-### لماذا أحتاج إلى Bridged ETH لإكمال النقل؟
+### Why do I need bridged ETH to complete my transfer?
@@ -206,19 +206,19 @@ The tokens that are being undelegated are "locked" and therefore cannot be trans
-\ \* إذا لزم الأمر -أنت تستخدم عنوان عقد.
+\*If necessary - i.e. you're using a contract address.
-### كيف سأعرف ما إذا كان الرسم البياني الفرعي الذي قمت بعمل إشارة تنسيق عليه قد انتقل إلى L2؟
+### How will I know if the Subgraph I curated has moved to L2?
-عند عرض صفحة تفاصيل الرسم البياني الفرعي ، ستعلمك لافتة بأنه تم نقل هذا الرسم البياني الفرعي. يمكنك اتباع التعليمات لنقل إشارة التنسيق الخاص بك. يمكنك أيضًا العثور على هذه المعلومات في صفحة تفاصيل الرسم البياني الفرعي لأي رسم بياني فرعي تم نقله.
+When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved.
-### ماذا لو كنت لا أرغب في نقل إشارة التنسيق الخاص بي إلى L2؟
+### What if I don't want to move my curation signal to L2?
-عندما يتم إهمال الرسم البياني الفرعي ، يكون لديك خيار سحب الإشارة. وبالمثل ، إذا انتقل الرسم البياني الفرعي إلى L2 ، فيمكنك اختيار سحب الإشارة في شبكة Ethereum الرئيسية أو إرسال الإشارة إلى L2.
+When a Subgraph is deprecated, you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal on Ethereum mainnet or send the signal to L2.
-### كيف أعرف أنه تم نقل إشارة التنسيق بنجاح؟
+### How do I know if my curation signal transferred successfully?
-يمكن الوصول إلى تفاصيل الإشارة عبر Explorer بعد حوالي 20 دقيقة من بدء أداة النقل للـ L2.
+Signal details will be accessible via Explorer approximately 20 minutes after the L2 transfer tool is initiated.
-### هل يمكنني نقل إشاة التنسيق الخاص بي على أكثر من رسم بياني فرعي في وقت واحد؟
+### Can I transfer my curation on more than one Subgraph at a time?
-لا يوجد خيار كهذا حالياً.
+There is currently no such option.
@@ -266,7 +266,7 @@ The tokens that are being undelegated are "locked" and therefore cannot be trans
-### هل يجب أن أقوم بالفهرسة على Arbitrum قبل أن أنقل حصتي؟
+### Do I need to index on Arbitrum before I transfer my stake?
-يمكنك تحويل حصتك بشكل فعال أولاً قبل إعداد الفهرسة ، ولكن لن تتمكن من المطالبة بأي مكافآت على L2 حتى تقوم بتخصيصها لـ subgraphs على L2 وفهرستها وعرض POIs.
+You can effectively transfer your stake first before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs.
-### هل يستطيع المفوضون نقل تفويضهم قبل نقل indexing stake الخاص بي؟
+### Can Delegators move their delegation before I move my indexing stake?
diff --git a/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-guide.mdx
index af5a133538d6..5863ff2de0a2 100644
--- a/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-guide.mdx
+++ b/website/src/pages/ar/archived/arbitrum/l2-transfer-tools-guide.mdx
@@ -6,53 +6,53 @@ title: L2 Transfer Tools Guide
Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them.
-## كيف تنقل الغراف الفرعي الخاص بك إلى شبكة آربترم (الطبقة الثانية)
+## How to transfer your Subgraph to Arbitrum (L2)
-## فوائد نقل الغراف الفرعي الخاصة بك
+## Benefits of transferring your Subgraphs
-مجتمع الغراف والمطورون الأساسيون كانوا [يستعدون] (https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) للإنتقال إلى آربترم على مدى العام الماضي. وتعتبر آربترم سلسلة كتل من الطبقة الثانية أو "L2"، حيث ترث الأمان من سلسلة الإيثيريوم ولكنها توفر رسوم غازٍ أقل بشكلٍ كبير.
+The Graph community and core devs have been [preparing](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) to move to Arbitrum over the past year. Arbitrum, a layer 2 or "L2" blockchain, inherits the security from Ethereum but provides drastically lower gas fees.
-عندما تقوم بنشر أو ترقية الغرافات الفرعية الخاصة بك إلى شبكة الغراف، فأنت تتفاعل مع عقودٍ ذكيةٍ في البروتوكول وهذا يتطلب دفع رسوم الغاز باستخدام عملة الايثيريوم. من خلال نقل غرافاتك الفرعية إلى آربترم، فإن أي ترقيات مستقبلية لغرافك الفرعي ستتطلب رسوم غازٍ أقل بكثير. الرسوم الأقل، وكذلك حقيقة أن منحنيات الترابط التنسيقي على الطبقة الثانية مستقيمة، تجعل من الأسهل على المنسِّقين الآخرين تنسيق غرافك الفرعي، ممّا يزيد من مكافآت المفهرِسين على غرافك الفرعي. هذه البيئة ذات التكلفة-الأقل كذلك تجعل من الأرخص على المفهرسين أن يقوموا بفهرسة وخدمة غرافك الفرعي. سوف تزداد مكافآت الفهرسة على آربترم وتتناقص على شبكة إيثيريوم الرئيسية على مدى الأشهر المقبلة، لذلك سيقوم المزيد والمزيد من المُفَهرِسين بنقل ودائعهم المربوطة وتثبيت عملياتهم على الطبقة الثانية.
+When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2.
-## فهم ما يحدث مع الإشارة وغرافك الفرعي على الطبقة الأولى وعناوين مواقع الإستعلام
+## Understanding what happens with signal, your L1 Subgraph and query URLs
-عند نقل سبجراف إلى Arbitrum، يتم استخدام جسر Arbitrum GRT، الذي بدوره يستخدم جسر Arbitrum الأصلي، لإرسال السبجراف إلى L2. سيؤدي عملية "النقل" إلى إهمال السبجراف على شبكة الإيثيريوم الرئيسية وإرسال المعلومات لإعادة إنشاء السبجراف على L2 باستخدام الجسر. ستتضمن أيضًا رصيد GRT المرهون المرتبط بمالك السبجراف، والذي يجب أن يكون أكبر من الصفر حتى يقبل الجسر النقل.
+Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer.
-عندما تختار نقل الرسم البياني الفرعي ، سيؤدي ذلك إلى تحويل جميع إشارات التنسيق الخاصة بالرسم الفرعي إلى GRT. هذا يعادل "إهمال" الرسم البياني الفرعي على الشبكة الرئيسية. سيتم إرسال GRT المستخدمة لعملية التنسيق الخاصة بك إلى L2 جمباً إلى جمب مع الرسم البياني الفرعي ، حيث سيتم استخدامها لإنتاج الإشارة نيابة عنك.
+When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf.
-يمكن للمنسقين الآخرين اختيار ما إذا كانوا سيسحبون جزء من GRT الخاص بهم ، أو نقله أيضًا إلى L2 لصك إشارة على نفس الرسم البياني الفرعي. إذا لم يقم مالك الرسم البياني الفرعي بنقل الرسم البياني الفرعي الخاص به إلى L2 وقام بإيقافه يدويًا عبر استدعاء العقد ، فسيتم إخطار المنسقين وسيتمكنون من سحب تنسيقهم.
+Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation.
-بمجرد نقل الرسم البياني الفرعي ، لن يتلقى المفهرسون بعد الآن مكافآت لفهرسة الرسم البياني الفرعي، نظرًا لأنه يتم تحويل كل التنسيق لـ GRT. ومع ذلك ، سيكون هناك مفهرسون 1) سيستمرون في خدمة الرسوم البيانية الفرعية المنقولة لمدة 24 ساعة ، و 2) سيبدأون فورًا في فهرسة الرسم البياني الفرعي على L2. ونظرًا لأن هؤلاء المفهرسون لديهم بالفعل رسم بياني فرعي مفهرس ، فلا داعي لانتظار مزامنة الرسم البياني الفرعي ، وسيكون من الممكن الاستعلام عن الرسم البياني الفرعي على L2 مباشرة تقريبًا.
+As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately.
-يجب إجراء الاستعلامات على الرسم البياني الفرعي في L2 على عنوان URL مختلف (على \`` Arbitrum-gateway.thegraph.com`) ، لكن عنوان URL L1 سيستمر في العمل لمدة 48 ساعة على الأقل. بعد ذلك ، ستقوم بوابة L1 بإعادة توجيه الاستعلامات إلى بوابة L2 (لبعض الوقت) ، ولكن هذا سيضيف زمن تأخير لذلك يوصى تغيير جميع استعلاماتك إلى عنوان URL الجديد في أقرب وقت ممكن.
+Queries to the L2 Subgraph will need to be made to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency, so it is recommended to switch all your queries to the new URL as soon as possible.
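+
+In practice, switching is just a hostname change. Assuming the standard gateway URL format (the API key and Subgraph ID below are placeholders):
+
+```
+# L1 (works for at least 48 hours, then proxied with added latency):
+https://gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>
+
+# L2:
+https://arbitrum-gateway.thegraph.com/api/<api-key>/subgraphs/id/<subgraph-id>
+```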
-## اختيار محفظة L2 الخاصة بك
+## Choosing your L2 wallet
-عندما قمت بنشر subgraph الخاص بك على الشبكة الرئيسية ، فقد استخدمت محفظة متصلة لإنشاء subgraph ، وتمتلك هذه المحفظة NFT الذي يمثل هذا subgraph ويسمح لك بنشر التحديثات.
+When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates.
-عند نقل الرسم البياني الفرعي إلى Arbitrum ، يمكنك اختيار محفظة مختلفة والتي ستمتلك هذا الـ subgraph NFT على L2.
+When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2.
-إذا كنت تستخدم محفظة "عادية" مثل MetaMask (حساب مملوك خارجيًا EOA ، محفظة ليست بعقد ذكي) ، فهذا اختياري ويوصى بالاحتفاظ بعنوان المالك نفسه كما في L1.
+If you're using a "regular" wallet like MetaMask (an Externally Owned Account or EOA, i.e. a wallet that is not a smart contract), then this is optional and it is recommended to keep the same owner address as in L1.
-إذا كنت تستخدم محفظة بعقد ذكي ، مثل multisig (على سبيل المثال Safe) ، فإن اختيار عنوان مختلف لمحفظة L2 أمر إلزامي ، حيث من المرجح أن هذا الحساب موجود فقط على mainnet ولن تكون قادرًا على إجراء المعاملات على Arbitrum باستخدام هذه المحفظة. إذا كنت ترغب في الاستمرار في استخدام محفظة عقد ذكية أو multisig ، فقم بإنشاء محفظة جديدة على Arbitrum واستخدم عنوانها كمالك للرسم البياني الفرعي الخاص بك على L2.
+If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph.
-** من المهم جدًا استخدام عنوان محفظة تتحكم فيه ، ويمكنه إجراء معاملات على Arbitrum. وإلا فسيتم فقد الرسم البياني الفرعي ولا يمكن استعادته. **
+**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.**
-## التحضير لعملية النقل: إنشاء جسر لـبعض ETH
+## Preparing for the transfer: bridging some ETH
-يتضمن نقل الغراف الفرعي إرسال معاملة عبر الجسر ، ثم تنفيذ معاملة أخرى على شبكة أربترم. تستخدم المعاملة الأولى الإيثيريوم على الشبكة الرئيسية ، وتتضمن بعضًا من إيثيريوم لدفع ثمن الغاز عند استلام الرسالة على الطبقة الثانية. ومع ذلك ، إذا كان هذا الغاز غير كافٍ ، فسيتعين عليك إعادة إجراء المعاملة ودفع ثمن الغاز مباشرةً على الطبقة الثانية (هذه هي "الخطوة 3: تأكيد التحويل" أدناه). يجب تنفيذ هذه الخطوة ** في غضون 7 أيام من بدء التحويل **. علاوة على ذلك ، سيتم إجراء المعاملة الثانية مباشرة على شبكة أربترم ("الخطوة 4: إنهاء التحويل على الطبقة الثانية"). لهذه الأسباب ، ستحتاج بعضًا من إيثيريوم في محفظة أربترم. إذا كنت تستخدم متعدد التواقيع أو عقداً ذكياً ، فيجب أن يكون هناك بعضًا من إيثيريوم في المحفظة العادية (حساب مملوك خارجيا) التي تستخدمها لتنفيذ المعاملات ، وليس على محفظة متعددة التواقيع.
+Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself.
يمكنك شراء إيثيريوم من بعض المنصات وسحبها مباشرة إلى أربترم، أو يمكنك استخدام جسر أربترم لإرسال إيثيريوم من محفظة الشبكة الرئيسيةإلى الطبقة الثانية: [bridge.arbitrum.io] (http://bridge.arbitrum.io). نظرًا لأن رسوم الغاز على أربترم أقل ، فستحتاج فقط إلى مبلغ صغير. من المستحسن أن تبدأ بمبلغ منخفض (0 على سبيل المثال ، 01 ETH) للموافقة على معاملتك.
-## العثور على أداة نقل الغراف الفرعي
+## Finding the Subgraph Transfer Tool
-يمكنك العثور على أداة نقل L2 في صفحة الرسم البياني الفرعي الخاص بك على Subgraph Studio:
+You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio:

-إذا كنت متصلاً بالمحفظة التي تمتلك الغراف الفرعي، فيمكنك الوصول إليها عبر المستكشف، وذلك عن طريق الانتقال إلى صفحة الغراف الفرعي على المستكشف:
+It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer:

@@ -60,19 +60,19 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools
## الخطوة 1: بدء عملية النقل
-قبل بدء عملية النقل، يجب أن تقرر أي عنوان سيكون مالكًا للغراف الفرعي على الطبقة الثانية (انظر "اختيار محفظة الطبقة الثانية" أعلاه)، ويُوصَى بشدة بأن يكون لديك بعضًا من الإيثيريوم لرسوم الغاز على أربترم. يمكنك الاطلاع على (التحضير لعملية النقل: تحويل بعضًا من إيثيريوم عبر الجسر." أعلاه).
+Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above).
-يرجى أيضًا ملاحظة أن نقل الرسم البياني الفرعي يتطلب وجود كمية غير صفرية من إشارة التنسيق عليه بنفس الحساب الذي يمتلك الرسم البياني الفرعي ؛ إذا لم تكن قد أشرت إلى الرسم البياني الفرعي ، فسيتعين عليك إضافة القليل من إشارة التنسيق (يكفي إضافة مبلغ صغير مثل 1 GRT).
+Please also note that transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice).
-بعد فتح أداة النقل، ستتمكن من إدخال عنوان المحفظة في الطبقة الثانية في حقل "عنوان محفظة الاستلام". تأكد من إدخال العنوان الصحيح هنا. بعد ذلك، انقر على "نقل الغراف الفرعي"، وسيتم طلب تنفيذ العملية في محفظتك. (يُرجى ملاحظة أنه يتم تضمين بعضًا من الإثيريوم لدفع رسوم الغاز في الطبقة الثانية). بعد تنفيذ العملية، سيتم بدء عملية النقل وإهمال الغراف الفرعي في الطبقة الأولى. (يمكنك الاطلاع على "فهم ما يحدث مع الإشارة والغراف الفرعي في الطبقة الأولى وعناوين الاستعلام" أعلاه لمزيد من التفاصيل حول ما يحدث خلف الكواليس).
+After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes).
-إذا قمت بتنفيذ هذه الخطوة، \*\*يجب عليك التأكد من أنك ستستكمل الخطوة 3 في غضون 7 أيام، وإلا فإنك ستفقد الغراف الفرعي والإشارة GRT الخاصة بك. يرجع ذلك إلى آلية التواصل بين الطبقة الأولى والطبقة الثانية في أربترم: الرسائل التي ترسل عبر الجسر هي "تذاكر قابلة لإعادة المحاولة" يجب تنفيذها في غضون 7 أيام، وقد يتطلب التنفيذ الأولي إعادة المحاولة إذا كان هناك زيادة في سعر الغاز على أربترم.
+If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retryable tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum.

-## الخطوة 2: الانتظار حتى يتم نقل الغراف الفرعي إلى الطبقة الثانية
+## Step 2: Waiting for the Subgraph to get to L2
-بعد بدء عملية النقل، يتعين على الرسالة التي ترسل الـ subgraph من L1 إلى L2 أن يتم نشرها عبر جسر Arbitrum. يستغرق ذلك حوالي 20 دقيقة (ينتظر الجسر لكتلة الشبكة الرئيسية التي تحتوي على المعاملة حتى يتأكد أنها "آمنة" من إمكانية إعادة ترتيب السلسلة).
+After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs).
بمجرد انتهاء وقت الانتظار ، ستحاول Arbitrum تنفيذ النقل تلقائيًا على عقود L2.
@@ -80,7 +80,7 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools
## الخطوة الثالثة: تأكيد التحويل
-في معظم الحالات ، سيتم تنفيذ هذه الخطوة تلقائيًا لأن غاز الطبقة الثانية المضمن في الخطوة 1 يجب أن يكون كافيًا لتنفيذ المعاملة التي تتلقى الغراف الفرعي في عقود أربترم. ومع ذلك ، في بعض الحالات ، من الممكن أن يؤدي ارتفاع أسعار الغاز على أربترم إلى فشل هذا التنفيذ التلقائي. وفي هذه الحالة ، ستكون "التذكرة" التي ترسل غرافك الفرعي إلى الطبقة الثانية معلقة وتتطلب إعادة المحاولة في غضون 7 أيام.
+In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days.
في هذا الحالة ، فستحتاج إلى الاتصال باستخدام محفظة الطبقة الثانية والتي تحتوي بعضاً من إيثيريوم على أربترم، قم بتغيير شبكة محفظتك إلى أربترم، والنقر فوق "تأكيد النقل" لإعادة محاولة المعاملة.
@@ -88,33 +88,33 @@ Some frequent questions about these tools are answered in the [L2 Transfer Tools
## الخطوة 4: إنهاء عملية النقل على L2
-في هذه المرحلة، تم استلام الغراف الفرعي والـ GRT الخاص بك على أربترم، ولكن الغراف الفرعي لم يتم نشره بعد. ستحتاج إلى الربط باستخدام محفظة الطبقة الثانية التي اخترتها كمحفظة استلام، وتغيير شبكة محفظتك إلى أربترم، ثم النقر على "نشر الغراف الفرعي"
+At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph."
-
+
-
+
-سيؤدي هذا إلى نشر الغراف الفرعي حتى يتمكن المفهرسون الذين يعملون في أربترم بالبدء في تقديم الخدمة. كما أنه سيعمل أيضًا على إصدار إشارة التنسيق باستخدام GRT التي تم نقلها من الطبقة الأولى.
+This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that was transferred from L1.
## Step 5: Updating the query URL
-تم نقل غرافك الفرعي بنجاح إلى أربترم! للاستعلام عن الغراف الفرعي ، سيكون عنوان URL الجديد هو:
+Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be:
`https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]`
-لاحظ أن ID الغراف الفرعي على أربترم سيكون مختلفًا عن الذي لديك في الشبكة الرئيسية، ولكن يمكنك العثور عليه في المستكشف أو استوديو. كما هو مذكور أعلاه (راجع "فهم ما يحدث للإشارة والغراف الفرعي في الطبقة الأولى وعناوين الاستعلام") سيتم دعم عنوان URL الطبقة الأولى القديم لفترة قصيرة ، ولكن يجب عليك تبديل استعلاماتك إلى العنوان الجديد بمجرد مزامنة الغراف الفرعي على الطبقة الثانية.
+Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2.
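As an illustration, a minimal sketch of constructing a query against the new L2 endpoint; the API key and Subgraph ID below are hypothetical placeholders, and the `_meta` block-number query is just one common health-check query:

```python
import json

# Placeholders: substitute your own API key and the new L2 Subgraph ID
# shown in Explorer or Subgraph Studio.
api_key = "YOUR_API_KEY"
l2_subgraph_id = "YOUR_L2_SUBGRAPH_ID"

# The new query URL follows the pattern shown above.
url = (
    "https://arbitrum-gateway.thegraph.com/api/"
    f"{api_key}/subgraphs/id/{l2_subgraph_id}"
)

# A GraphQL request body asking for the latest indexed block number.
payload = json.dumps({"query": "{ _meta { block { number } } }"})

# POST `payload` to `url` with Content-Type: application/json once the
# L2 Subgraph has synced.
print(url)
```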
## كيفية نقل التنسيق الخاص بك إلى أربترم (الطبقة الثانية)
-## Understanding what happens to curation on subgraph transfers to L2
+## Understanding what happens to curation on Subgraph transfers to L2
-When the owner of a subgraph transfers a subgraph to Arbitrum, all of the subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a subgraph version or deployment but that follows the latest version of a subgraph.
+When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph.
-This conversion from signal to GRT is the same as what would happen if the subgraph owner deprecated the subgraph in L1. When the subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles subgraph upgrades and auto-migrated signal). Each Curator on that subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the subgraph.
+This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph.
-A fraction of these GRT corresponding to the subgraph owner is sent to L2 together with the subgraph.
+A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph.
-At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be help indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it.
+At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it.
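The pro-rata claim described above can be sketched as follows; the function and names are made up for illustration, and the real accounting happens on-chain in the GNS contract:

```python
def curator_claims(gns_grt: float, shares: dict) -> dict:
    """Split the GRT held by the GNS pro rata by curation shares.

    Illustrative sketch only: each Curator's claim is proportional to
    the shares they held on the Subgraph when the signal was burned.
    """
    total_shares = sum(shares.values())
    return {curator: gns_grt * s / total_shares for curator, s in shares.items()}

# e.g. 1000 GRT resulting from the burn, two Curators holding 3:1 shares:
claims = curator_claims(1000.0, {"curatorA": 75.0, "curatorB": 25.0})
# curatorA can claim 750.0 GRT, curatorB 250.0 GRT
```

Because every claim is a fixed fraction of the burned GRT, the payout is the same whenever a Curator chooses to withdraw or transfer, which is why there is no rush.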
## اختيار محفظة L2 الخاصة بك
@@ -130,9 +130,9 @@ If you're using a smart contract wallet, like a multisig (e.g. a Safe), then cho
Before starting the transfer, you must decide which address will own the curation on L2 (see "Choosing your L2 wallet" above), and it is recommended having some ETH for gas already bridged on Arbitrum in case you need to retry the execution of the message on L2. You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - since gas fees on Arbitrum are so low, you should only need a small amount, e.g. 0.01 ETH will probably be more than enough.
-If a subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred subgraph.
+If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph.
-When looking at the subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool.
+When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool.

@@ -162,4 +162,4 @@ In most cases, this step will auto-execute as the L2 gas included in step 1 shou
## Withdrawing your curation on L1
-If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address.
+If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address.
diff --git a/website/src/pages/ar/archived/sunrise.mdx b/website/src/pages/ar/archived/sunrise.mdx
index eb18a93c506c..71262f22e7d8 100644
--- a/website/src/pages/ar/archived/sunrise.mdx
+++ b/website/src/pages/ar/archived/sunrise.mdx
@@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ
## What was the Sunrise of Decentralized Data?
-The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled subgraph developers to upgrade to The Graph’s decentralized network seamlessly.
+The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly.
-This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs.
+This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs.
### What happened to the hosted service?
-The hosted service query endpoints are no longer available, and developers cannot deploy new subgraphs on the hosted service.
+The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service.
-During the upgrade process, owners of hosted service subgraphs could upgrade their subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded subgraphs.
+During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs.
### Was Subgraph Studio impacted by this upgrade?
No, Subgraph Studio was not impacted by Sunrise. Subgraphs were immediately available for querying, powered by the upgrade Indexer, which uses the same infrastructure as the hosted service.
-### Why were subgraphs published to Arbitrum, did it start indexing a different network?
+### Why were Subgraphs published to Arbitrum, did it start indexing a different network?
-The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/)
+The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/).
## About the Upgrade Indexer
> The upgrade Indexer is currently active.
-The upgrade Indexer was implemented to improve the experience of upgrading subgraphs from the hosted service to The Graph Network and support new versions of existing subgraphs that had not yet been indexed.
+The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed.
### What does the upgrade Indexer do?
-- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a subgraph is published.
+- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published.
- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/).
-- Indexers that operate an upgrade Indexer do so as a public service to support new subgraphs and additional chains that lack indexing rewards before The Graph Council approves them.
+- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them.
### Why is Edge & Node running the upgrade Indexer?
-Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service subgraphs.
+Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs.
### What does the upgrade indexer mean for existing Indexers?
Chains previously only supported on the hosted service were made available to developers on The Graph Network without indexing rewards at first.
-However, this action unlocked query fees for any interested Indexer and increased the number of subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain.
+However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain.
-The upgrade Indexer also provides the Indexer community with information about the potential demand for subgraphs and new chains on The Graph Network.
+The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network.
### What does this mean for Delegators?
-The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity.
+The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity.
### Did the upgrade Indexer compete with existing Indexers for rewards?
-No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards.
+No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards.
-It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs.
+It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for the respective chains and Subgraphs.
-### How does this affect subgraph developers?
+### How does this affect Subgraph developers?
-Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade.
+Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade.
### How does the upgrade Indexer benefit data consumers?
@@ -71,10 +71,10 @@ The upgrade Indexer enables chains on the network that were previously only supp
The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market.
-### When will the upgrade Indexer stop supporting a subgraph?
+### When will the upgrade Indexer stop supporting a Subgraph?
-The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it.
+The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it.
-Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days.
+Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days.
-Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it.
+Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it.
diff --git a/website/src/pages/ar/global.json b/website/src/pages/ar/global.json
index b543fd624f0e..d9110259f5cb 100644
--- a/website/src/pages/ar/global.json
+++ b/website/src/pages/ar/global.json
@@ -6,6 +6,7 @@
"subgraphs": "Subgraphs",
"substreams": "متعدد-السلاسل",
"sps": "Substreams-Powered Subgraphs",
+ "tokenApi": "Token API",
"indexing": "Indexing",
"resources": "Resources",
"archived": "Archived"
@@ -24,9 +25,51 @@
"linkToThisSection": "Link to this section"
},
"content": {
- "note": "Note",
+ "callout": {
+ "note": "Note",
+ "tip": "Tip",
+ "important": "Important",
+ "warning": "Warning",
+ "caution": "Caution"
+ },
"video": "Video"
},
+ "openApi": {
+ "parameters": {
+ "pathParameters": "Path Parameters",
+ "queryParameters": "Query Parameters",
+ "headerParameters": "Header Parameters",
+ "cookieParameters": "Cookie Parameters",
+ "parameter": "Parameter",
+ "description": "الوصف",
+ "value": "Value",
+ "required": "Required",
+ "deprecated": "Deprecated",
+ "defaultValue": "Default value",
+ "minimumValue": "Minimum value",
+ "maximumValue": "Maximum value",
+ "acceptedValues": "Accepted values",
+ "acceptedPattern": "Accepted pattern",
+ "format": "Format",
+ "serializationFormat": "Serialization format"
+ },
+ "request": {
+ "label": "Test this endpoint",
+ "noCredentialsRequired": "No credentials required",
+ "send": "Send Request"
+ },
+ "responses": {
+ "potentialResponses": "Potential Responses",
+ "status": "Status",
+ "description": "الوصف",
+ "liveResponse": "Live Response",
+ "example": "Example"
+ },
+ "errors": {
+ "invalidApi": "Could not retrieve API {0}.",
+ "invalidOperation": "Could not retrieve operation {0} in API {1}."
+ }
+ },
"notFound": {
"title": "Oops! This page was lost in space...",
"subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.",
diff --git a/website/src/pages/ar/index.json b/website/src/pages/ar/index.json
index 0f2dfc58967a..2443372843a8 100644
--- a/website/src/pages/ar/index.json
+++ b/website/src/pages/ar/index.json
@@ -7,7 +7,7 @@
"cta2": "Build your first subgraph"
},
"products": {
- "title": "The Graph’s Products",
+ "title": "The Graph's Products",
"description": "Choose a solution that fits your needs—interact with blockchain data your way.",
"subgraphs": {
"title": "Subgraphs",
@@ -21,7 +21,7 @@
},
"sps": {
"title": "Substreams-Powered Subgraphs",
- "description": "Boost your subgraph’s efficiency and scalability by using Substreams.",
+ "description": "Boost your subgraph's efficiency and scalability by using Substreams.",
"cta": "Set up a Substreams-powered subgraph"
},
"graphNode": {
@@ -37,10 +37,86 @@
},
"supportedNetworks": {
"title": "الشبكات المدعومة",
+ "details": "Network Details",
+ "services": "Services",
+ "type": "النوع",
+ "protocol": "Protocol",
+ "identifier": "Identifier",
+ "chainId": "Chain ID",
+ "nativeCurrency": "Native Currency",
+ "docs": "التوثيق",
+ "shortName": "Short Name",
+ "guides": "Guides",
+ "search": "Search networks",
+ "showTestnets": "Show Testnets",
+ "loading": "Loading...",
+ "infoTitle": "Info",
+ "infoText": "Boost your developer experience by enabling The Graph's indexing network.",
+ "infoLink": "Integrate new network",
"description": {
"base": "The Graph supports {0}. To add a new network, {1}",
"networks": "networks",
"completeThisForm": "complete this form"
+ },
+ "emptySearch": {
+ "title": "No networks found",
+ "description": "No networks match your search for \"{0}\"",
+ "clearSearch": "Clear search",
+ "showTestnets": "Show testnets"
+ },
+ "tableHeaders": {
+ "name": "Name",
+ "id": "ID",
+ "subgraphs": "Subgraphs",
+ "substreams": "متعدد-السلاسل",
+ "firehose": "Firehose",
+ "tokenapi": "Token API"
+ }
+ },
+ "networkGuides": {
+ "evm": {
+ "subgraphQuickStart": {
+ "title": "Subgraph quick start",
+ "description": "Kickstart your journey into subgraph development."
+ },
+ "substreams": {
+ "title": "متعدد-السلاسل",
+ "description": "Stream high-speed data for real-time indexing."
+ },
+ "timeseries": {
+ "title": "Timeseries & Aggregations",
+ "description": "Learn to track metrics like daily volumes or user growth."
+ },
+ "advancedFeatures": {
+ "title": "Advanced subgraph features",
+ "description": "Leverage features like custom data sources, event handlers, and topic filters."
+ },
+ "billing": {
+ "title": "الفوترة",
+ "description": "Optimize costs and manage billing efficiently."
+ }
+ },
+ "nonEvm": {
+ "officialDocs": {
+ "title": "Official Substreams docs",
+ "description": "Stream high-speed data for real-time indexing."
+ },
+ "spsIntro": {
+ "title": "Substreams-powered Subgraphs Intro",
+ "description": "Supercharge your subgraph's efficiency with Substreams."
+ },
+ "substreamsDev": {
+ "title": "Substreams.dev",
+ "description": "Access tutorials, templates, and documentation to build custom data modules."
+ },
+ "substreamsStarter": {
+ "title": "Substreams starter",
+ "description": "Leverage this boilerplate to create your first Substreams module."
+ },
+ "substreamsRepo": {
+ "title": "Substreams repo",
+ "description": "Study, contribute to, or customize the core Substreams framework."
+ }
}
},
"guides": {
@@ -80,15 +156,15 @@
"watchOnYouTube": "Watch on YouTube",
"theGraphExplained": {
"title": "The Graph Explained In 1 Minute",
- "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video."
+ "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video."
},
"whatIsDelegating": {
"title": "What is Delegating?",
- "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating."
+ "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph."
},
"howToIndexSolana": {
"title": "How to Index Solana with a Substreams-powered Subgraph",
- "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph."
+ "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases."
}
},
"time": {
diff --git a/website/src/pages/ar/indexing/chain-integration-overview.mdx b/website/src/pages/ar/indexing/chain-integration-overview.mdx
index e6b95ec0fc17..af9a582b58d3 100644
--- a/website/src/pages/ar/indexing/chain-integration-overview.mdx
+++ b/website/src/pages/ar/indexing/chain-integration-overview.mdx
@@ -36,7 +36,7 @@ Ready to shape the future of The Graph Network? [Start your proposal](https://gi
### 2. ماذا يحدث إذا تم دعم فايرهوز و سبستريمز بعد أن تم دعم الشبكة على الشبكة الرئيسية؟
-هذا سيؤثر فقط على دعم البروتوكول لمكافآت الفهرسة على الغرافات الفرعية المدعومة من سبستريمز. تنفيذ الفايرهوز الجديد سيحتاج إلى الفحص على شبكة الاختبار، وفقًا للمنهجية الموضحة للمرحلة الثانية في هذا المقترح لتحسين الغراف. وعلى نحو مماثل، وعلى افتراض أن التنفيذ فعال وموثوق به، سيتتطالب إنشاء طلب سحب على [مصفوفة دعم الميزات] (https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) ("مصادر بيانات سبستريمز" ميزة للغراف الفرعي)، بالإضافة إلى مقترح جديد لتحسين الغراف، لدعم البروتوكول لمكافآت الفهرسة. يمكن لأي شخص إنشاء طلب السحب ومقترح تحسين الغراف؛ وسوف تساعد المؤسسة في الحصول على موافقة المجلس.
+This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval.
### 3. How much time will the process of reaching full protocol support take?
diff --git a/website/src/pages/ar/indexing/new-chain-integration.mdx b/website/src/pages/ar/indexing/new-chain-integration.mdx
index bff012725d9d..bcd82dafed18 100644
--- a/website/src/pages/ar/indexing/new-chain-integration.mdx
+++ b/website/src/pages/ar/indexing/new-chain-integration.mdx
@@ -2,7 +2,7 @@
title: New Chain Integration
---
-Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies:
+Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies:
1. **EVM JSON-RPC**
2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms.
@@ -25,7 +25,7 @@ For Graph Node to be able to ingest data from an EVM chain, the RPC node must ex
- `eth_getBlockByHash`
- `net_version`
- `eth_getTransactionReceipt`، ضمن طلب دفعة استدعاء الإجراء عن بُعد باستخدام تمثيل كائنات جافا سكريبت
-- `trace_filter` *(limited tracing and optionally required for Graph Node)*
+- `trace_filter` _(limited tracing and optionally required for Graph Node)_
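The batched receipt lookups mentioned above can be sketched like this; the transaction hashes are placeholder values, and the exact batching Graph Node performs is internal to its implementation:

```python
import json

# A minimal JSON-RPC 2.0 batch request fetching transaction receipts,
# as in the `eth_getTransactionReceipt` requirement listed above.
tx_hashes = ["0x" + "11" * 32, "0x" + "22" * 32]  # placeholder hashes

batch = [
    {
        "jsonrpc": "2.0",
        "id": i,
        "method": "eth_getTransactionReceipt",
        "params": [tx],
    }
    for i, tx in enumerate(tx_hashes)
]

# POST this body to the chain's JSON-RPC endpoint with
# Content-Type: application/json; the node replies with one
# receipt object (or null) per entry in the batch.
body = json.dumps(batch)
```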
### 2. Firehose Integration
@@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through
## EVM considerations - Difference between JSON-RPC & Firehose
-While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing.
+While JSON-RPC and Firehose are both suitable for Subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces the number of RPC calls required for general indexing by 90%.
-- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes.
+- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes.
-> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers)
+> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (As a reminder, `eth_call`s are not good practice for developers.)
## Graph Node Configuration
-Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph.
+Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a Subgraph.
1. [Clone Graph Node](https://github.com/graphprotocol/graph-node)
@@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your
## Substreams-powered Subgraphs
-For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself.
+For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself.
diff --git a/website/src/pages/ar/indexing/overview.mdx b/website/src/pages/ar/indexing/overview.mdx
index 3bfd1cc210c3..f543bca55f32 100644
--- a/website/src/pages/ar/indexing/overview.mdx
+++ b/website/src/pages/ar/indexing/overview.mdx
@@ -7,7 +7,7 @@ Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) i
GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers also earn rewards on stake delegated to them by Delegators, who contribute to the network this way.
-يختار المفهرسون subgraphs للقيام بالفهرسة بناء على إشارة تنسيق subgraphs ، حيث أن المنسقون يقومون ب staking ل GRT وذلك للإشارة ل Subgraphs عالية الجودة. يمكن أيضا للعملاء (مثل التطبيقات) تعيين بارامترات حيث يقوم المفهرسون بمعالجة الاستعلامات ل Subgraphs وتسعير رسوم الاستعلام.
+Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (e.g. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing.
## FAQ
@@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT.
**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment, and the corresponding response contains a proof of query result validity.
-**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network.
+**Indexing rewards** - Generated via a 3% annual protocol-wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network.
### How are indexing rewards distributed?
-Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.**
+Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.**
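The two-step split described above can be sketched as follows. This is a hypothetical simplification for illustration only, not the protocol's actual onchain accounting (which operates in parts per million and per-allocation):

```python
# Hypothetical sketch: rewards are split across Subgraphs in proportion to
# curation signal, then across Indexers in proportion to allocated stake.
def distribute_rewards(total_rewards, subgraphs):
    """subgraphs: list of dicts with 'signal' (GRT signalled) and
    'allocations' (mapping of indexer -> allocated stake)."""
    total_signal = sum(s["signal"] for s in subgraphs)
    payouts = {}
    for s in subgraphs:
        # Step 1: this Subgraph's share of issuance, by signal proportion.
        subgraph_rewards = total_rewards * s["signal"] / total_signal
        # Step 2: split among Indexers by their allocated stake proportion.
        total_stake = sum(s["allocations"].values())
        for indexer, stake in s["allocations"].items():
            payouts[indexer] = payouts.get(indexer, 0) + subgraph_rewards * stake / total_stake
    return payouts

example = [
    {"signal": 300, "allocations": {"indexer-a": 100, "indexer-b": 100}},
    {"signal": 100, "allocations": {"indexer-a": 50}},
]
print(distribute_rewards(1000, example))
```

Note that an allocation only becomes eligible for its share once it is closed with a valid POI, which the sketch does not model.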
Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up-to-date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack.
### What is a proof of indexing (POI)?
-POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block.
+POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block.
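The key property of such a digest is determinism: any two Indexers that processed the same entity store transactions up to the same block must produce the same value. A minimal illustration of that idea (graph-node's real POI algorithm differs in encoding and scope; the deployment ID and transactions here are made up):

```python
import hashlib

def poi_digest(deployment_id, transactions, up_to_block):
    """Running digest over a deployment's entity store transactions,
    up to and including `up_to_block`. Illustrative only."""
    h = hashlib.sha256(deployment_id.encode())
    for block, tx in transactions:
        if block > up_to_block:
            break  # transactions past the target block do not affect the POI
        h.update(tx.encode())
    return h.hexdigest()

txs = [(1, "set Pair:0x1 reserve=5"), (2, "set Pair:0x1 reserve=7"), (3, "remove Pair:0x2")]
print(poi_digest("QmExampleDeployment", txs, up_to_block=2))
```

Because the digest covers only transactions up to the target block, two Indexers synced to different chain heads can still produce matching POIs for the same epoch boundary.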
### When are indexing rewards distributed?
@@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap
Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps:
-1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations:
+1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations:
```graphql
query indexerAllocations {
@@ -91,31 +91,31 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that
- **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%.
-### How do Indexers know which subgraphs to index?
+### How do Indexers know which Subgraphs to index?
-Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network:
+Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions, but to give a general idea, we'll discuss several key metrics used to evaluate Subgraphs in the network:
-- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up.
+- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up.
-- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand.
+- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand.
-- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply.
+- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply.
-- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards.
+- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards.
### What are the hardware requirements?
-- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded.
+- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded.
- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests.
-- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second.
-- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic.
+- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second.
+- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic.
-| Setup | Postgres (CPUs) | Postgres (memory in GBs) | Postgres (disk in TBs) | VMs (CPUs) | VMs (memory in GBs) |
-| --- | :-: | :-: | :-: | :-: | :-: |
-| Small | 4 | 8 | 1 | 4 | 16 |
-| Standard | 8 | 30 | 1 | 12 | 48 |
-| Medium | 16 | 64 | 2 | 32 | 64 |
-| Large | 72 | 468 | 3.5 | 48 | 184 |
+| Setup | Postgres (CPUs) | Postgres (memory in GBs) | Postgres (disk in TBs) | VMs (CPUs) | VMs (memory in GBs) |
+| -------- | :------------------: | :---------------------------: | :-------------------------: | :-------------: | :----------------------: |
+| Small | 4 | 8 | 1 | 4 | 16 |
+| Standard | 8 | 30 | 1 | 12 | 48 |
+| Medium | 16 | 64 | 2 | 32 | 64 |
+| Large | 72 | 468 | 3.5 | 48 | 184 |
### What are some basic security precautions an Indexer should take?
@@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making
## Infrastructure
-At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.
+At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a Subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.
-- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
+- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
-- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
+- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
-- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node; an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway.
-- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations.
+- **Indexer agent** - Facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations.
- **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server.
@@ -147,20 +147,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer
#### Graph Node
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
#### Indexer Service
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP server (for paid subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
-| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ---------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- |
+| 7600 | GraphQL HTTP server (for paid Subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
+| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
#### Indexer Agent
@@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`.
### Graph Node
-[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint.
+[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema and a set of mappings for transforming data sourced from the blockchain; Graph Node handles syncing the entire chain, monitoring for new blocks, and serving the data via a GraphQL endpoint.
#### Getting started from source
@@ -365,9 +365,9 @@ docker-compose up
To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of TypeScript applications to facilitate an Indexer's network participation. There are three Indexer components:
-- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each.
+- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each.
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways.
+- **Indexer service** - The only component that needs to be exposed externally, the service passes on Subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways.
- **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules.
@@ -525,7 +525,7 @@ graph indexer status
#### Indexer management using Indexer CLI
-The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution.
+The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on behalf of the Indexer. The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using the **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate the **actions queue** and also require explicit approval for execution.
#### Usage
@@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar
- `graph indexer rules set [options] ...` - Set one or more indexing rules.
-- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed.
+- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to `always`, then all available Subgraphs on the network will be indexed.
- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index.
@@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported
#### Indexing rules
-Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing.
+Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing.
-For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`.
+For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`.
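The threshold evaluation described above can be sketched as follows. This is a hypothetical simplification (the agent's real logic lives in the indexer-agent component); a deployment is chosen if any non-null threshold on the rule is crossed:

```python
# Illustrative rule: only minStake is set; unset thresholds are ignored.
RULE = {"minStake": 5, "minSignal": None, "minAverageQueryFees": None}

def should_index(rule, network_values):
    """Return True if any non-null threshold on the rule is exceeded
    by the values fetched from the network for this deployment."""
    checks = {
        "minStake": lambda v: network_values["stake"] > v,
        "minSignal": lambda v: network_values["signal"] > v,
        "minAverageQueryFees": lambda v: network_values["avgQueryFees"] > v,
    }
    return any(check(rule[name]) for name, check in checks.items()
               if rule.get(name) is not None)

# A deployment with 8 GRT allocated crosses the minStake threshold of 5.
print(should_index(RULE, {"stake": 8, "signal": 0, "avgQueryFees": 0}))
# A deployment with only 3 GRT allocated does not.
print(should_index(RULE, {"stake": 3, "signal": 0, "avgQueryFees": 0}))
```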
Data model:
@@ -679,7 +679,7 @@ graph indexer actions execute approve
Note that supported action types for allocation management have different input requirements:
-- `Allocate` - allocate stake to a specific subgraph deployment
+- `Allocate` - allocate stake to a specific Subgraph deployment
- required action params:
- deploymentID
@@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input
- poi
- force (forces using the provided POI even if it doesn’t match what the graph-node provides)
-- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment
+- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment
- required action params:
- allocationID
@@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input
#### Cost models
-Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers.
+Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers.
#### Agora
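A cost model in the Agora language pairs query predicates with cost expressions. A minimal sketch, in which the specific prices and the skip threshold are illustrative:

```
# Match queries that paginate `pairs` with a large skip value and
# price them proportionally to the captured $skip.
query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip;

# Fall back to a flat price for any other query shape.
default => 0.00025;
```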
@@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi
6. Call `stake()` to stake GRT in the protocol.
-7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address.
+7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that perform day-to-day actions such as allocating on Subgraphs and serving (paid) queries. To set the operator, call `setOperator()` with the operator address.
-8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks.
+8. (Optional) To control the distribution of rewards and strategically attract Delegators, Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and sets the `cooldownBlocks` period to 500 blocks.
```
setDelegationParameters(950000, 600000, 500)
@@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st
After being created by an Indexer a healthy allocation goes through two states.
-- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules.
+- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules.
- **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)).
-Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically.
+Indexers are recommended to utilize offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or that have some chance of failing nondeterministically.
diff --git a/website/src/pages/ar/indexing/supported-network-requirements.mdx b/website/src/pages/ar/indexing/supported-network-requirements.mdx
index 9c820d055399..d2364e39c668 100644
--- a/website/src/pages/ar/indexing/supported-network-requirements.mdx
+++ b/website/src/pages/ar/indexing/supported-network-requirements.mdx
@@ -2,17 +2,17 @@
title: Supported Network Requirements
---
-| Network | Guides | System Requirements | Indexing Rewards |
-| --- | --- | --- | :-: |
-| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal) [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU Ubuntu 22.04 16GB+ RAM >= 8 TiB NVMe SSD _last updated August 2023_ | ✅ |
-| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU Ubuntu 22.04 16GB+ RAM >= 5 TiB NVMe SSD _last updated August 2023_ | ✅ |
-| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal) [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal) [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU Ubuntu 22.04 16GB+ RAM >= 8 TiB NVMe SSD _last updated August 2023_ | ✅ |
+| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU Ubuntu 22.04 32GB+ RAM >= 10 TiB NVMe SSD _last updated August 2023_ | ✅ |
+| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal) [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU Debian 12 16GB+ RAM >= 1 TiB NVMe SSD _last updated 3rd April 2024_ | ✅ |
diff --git a/website/src/pages/ar/indexing/tap.mdx b/website/src/pages/ar/indexing/tap.mdx
index ee96a02cd5b8..e7085e5680bb 100644
--- a/website/src/pages/ar/indexing/tap.mdx
+++ b/website/src/pages/ar/indexing/tap.mdx
@@ -1,21 +1,21 @@
---
-title: TAP Migration Guide
+title: GraphTally Guide
---
-Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust.
+Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust.
## نظره عامة
-[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features:
+GraphTally is a drop-in replacement for the Scalar payment system currently in place. It provides the following key features:
- Efficiently handles micropayments.
- Adds a layer of consolidations to onchain transactions and costs.
- Allows Indexers control of receipts and payments, guaranteeing payment for queries.
- It enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders.
-## Specifics
+### Specifics
-TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process.
+GraphTally allows a sender to make multiple payments to a receiver, called **Receipts**, and aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process.
For each query, the gateway will send you a `signed receipt` that is stored in your database. Then, these queries will be aggregated by a `tap-agent` through a request. Afterwards, you’ll receive a RAV. You can update a RAV by sending it with newer receipts, which will generate a new RAV with an increased value.
@@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed
| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` |
| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` |
-### Requirements
+### Prerequisites
-In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`.
+In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query GraphTally updates. You can use The Graph Network to query it, or host it yourself on your `graph-node`.
-- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD)
-- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1)
+- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD)
+- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1)
-> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually.
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually.
## Migration Guide
@@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc
1. **Indexer Agent**
- Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components).
- - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs.
+ - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs.
2. **Indexer Service**
@@ -128,18 +128,18 @@ query_url = ""
status_url = ""
[subgraphs.network]
-# Query URL for the Graph Network subgraph.
+# Query URL for the Graph Network Subgraph.
query_url = ""
# Optional, deployment to look for in the local `graph-node`, if locally indexed.
-# Locally indexing the subgraph is recommended.
+# Locally indexing the Subgraph is recommended.
# NOTE: Use `query_url` or `deployment_id` only
deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
[subgraphs.escrow]
-# Query URL for the Escrow subgraph.
+# Query URL for the Escrow Subgraph.
query_url = ""
# Optional, deployment to look for in the local `graph-node`, if locally indexed.
-# Locally indexing the subgraph is recommended.
+# Locally indexing the Subgraph is recommended.
# NOTE: Use `query_url` or `deployment_id` only
deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
diff --git a/website/src/pages/ar/indexing/tooling/graph-node.mdx b/website/src/pages/ar/indexing/tooling/graph-node.mdx
index 0250f14a3d08..f5778789213d 100644
--- a/website/src/pages/ar/indexing/tooling/graph-node.mdx
+++ b/website/src/pages/ar/indexing/tooling/graph-node.mdx
@@ -2,31 +2,31 @@
title: Graph Node
---
-Graph Node is the component which indexes subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer.
+Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer.
This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node).
## Graph Node
-[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query.
+[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query.
Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node).
### PostgreSQL database
-The main store for the Graph Node, this is where subgraph data is stored, as well as metadata about subgraphs, and subgraph-agnostic network data such as the block cache, and eth_call cache.
+The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache.
### Network clients
In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client or it could be a more complex setup that load balances across multiple.
-While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)).
+While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)).
**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/).
### IPFS Nodes
-Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
### Prometheus metrics server
@@ -77,19 +77,19 @@ A complete Kubernetes example configuration can be found in the [indexer reposit
When it is running Graph Node exposes the following ports:
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the Graph Node JSON-RPC endpoint.
## Advanced Graph Node configuration
-At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the subgraphs to be indexed.
+At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed.
This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables.
@@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https:
#### Multiple Graph Nodes
-Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules).
+Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules).
> Note that multiple Graph Nodes can all be configured to use the same database, which itself can be horizontally scaled via sharding.
#### Deployment rules
-Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision.
+Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision.
Example deployment rule configuration:
@@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ]
match = { network = [ "xdai", "poa-core" ] }
indexers = [ "index_node_other_0" ]
[[deployment.rule]]
-# There's no 'match', so any subgraph matches
+# There's no 'match', so any Subgraph matches
shards = [ "sharda", "shardb" ]
indexers = [
"index_node_community_0",
@@ -167,11 +167,11 @@ Any node whose --node-id matches the regular expression will be set up to only r
For most use cases, a single Postgres database is sufficient to support a graph-node instance. When a graph-node instance outgrows a single Postgres database, it is possible to split the storage of graph-node's data across multiple Postgres databases. All databases together form the store of the graph-node instance. Each individual database is called a shard.
-Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed.
+Shards can be used to split Subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed.
Sharding becomes useful when your existing database can't keep up with the load that Graph Node puts on it, and when it's not possible to increase the database size anymore.
-> It is generally better make a single database as big as possible, before starting with shards. One exception is where query traffic is split very unevenly between subgraphs; in those situations it can help dramatically if the high-volume subgraphs are kept in one shard and everything else in another because that setup makes it more likely that the data for the high-volume subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume subgraphs.
+> It is generally better to make a single database as big as possible before starting with shards. One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another, because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by data that's not needed as much from low-volume Subgraphs.
In terms of configuring connections, start with max_connections in postgresql.conf set to 400 (or maybe even 200) and look at the store_connection_wait_time_ms and store_connection_checkout_count Prometheus metrics. Noticeable wait times (anything above 5ms) are an indication that there are too few connections available; high wait times there can also be caused by the database being very busy (like high CPU load). However, if the database seems otherwise stable, high wait times indicate a need to increase the number of connections. In the configuration, how many connections each graph-node instance can use is an upper limit, and Graph Node will not keep connections open if it doesn't need them.
@@ -188,7 +188,7 @@ ingestor = "block_ingestor_node"
#### Supporting multiple networks
-The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of:
+The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of:
- Multiple networks
- Multiple providers per network (this can allow splitting of load across providers, and can also allow for configuration of full nodes as well as archive nodes, with Graph Node preferring cheaper providers if a given workload allows).
@@ -225,11 +225,11 @@ Users who are operating a scaled indexing setup with advanced configuration may
### Managing Graph Node
-Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing subgraphs.
+Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs.
#### Logging
-Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace.
+Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace.
In addition setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs).
@@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker
Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs`.
-### Working with subgraphs
+### Working with Subgraphs
#### Indexing status API
-Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different subgraphs, checking proofs of indexing, inspecting subgraph features and more.
+Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more.
The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql).
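As a sketch, a status check against the API on port 8030 can be built like this. The deployment ID is a hypothetical placeholder; the query shape follows the indexing status schema linked above (`indexingStatuses`, `health`, `synced`, per-chain block info).

```python
import json

# Hypothetical deployment ID -- substitute one of your own deployments.
STATUS_QUERY = """
{
  indexingStatuses(subgraphs: ["QmExampleDeploymentId"]) {
    subgraph
    health
    synced
    chains { latestBlock { number } chainHeadBlock { number } }
  }
}
"""

# JSON body to POST to http://<graph-node>:8030/graphql,
# e.g. curl -X POST -H 'Content-Type: application/json' -d "$PAYLOAD" localhost:8030/graphql
payload = json.dumps({"query": STATUS_QUERY})
```

Comparing `latestBlock` against `chainHeadBlock` in the response is a quick way to see how far behind chainhead a deployment is.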
@@ -263,7 +263,7 @@ There are three separate parts of the indexing process:
- Processing events in order with the appropriate handlers (this can involve calling the chain for state, and fetching data from the store)
- Writing the resulting data to the store
-These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where subgraphs are slow to index, the underlying cause will depend on the specific subgraph.
+These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph.
Common causes of indexing slowness:
@@ -276,24 +276,24 @@ Common causes of indexing slowness:
- The provider itself falling behind the chain head
- Slowness in fetching new receipts at the chain head from the provider
-Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance.
+Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance.
-#### Failed subgraphs
+#### Failed Subgraphs
-During indexing subgraphs might fail, if they encounter data that is unexpected, some component not working as expected, or if there is some bug in the event handlers or configuration. There are two general types of failure:
+During indexing, Subgraphs might fail if they encounter unexpected data, if some component is not working as expected, or if there is a bug in the event handlers or configuration. There are two general types of failure:
- Deterministic failures: these are failures which will not be resolved with retries
- Non-deterministic failures: these might be down to issues with the provider, or some unexpected Graph Node error. When a non-deterministic failure occurs, Graph Node will retry the failing handlers, backing off over time.
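The "backing off over time" behavior for non-deterministic failures follows the usual exponential-backoff pattern, which can be sketched as below. The base delay, cap, and attempt count here are illustrative, not Graph Node's exact retry constants.

```python
def retry_delays(base_secs: float = 1.0, cap_secs: float = 600.0, attempts: int = 6) -> list[float]:
    """Exponential backoff schedule: double the delay after each
    failed retry of the handler, up to a cap."""
    delay, schedule = base_secs, []
    for _ in range(attempts):
        schedule.append(delay)
        delay = min(delay * 2, cap_secs)
    return schedule

print(retry_delays())  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
```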
-In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the subgraph code is required.
+In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the Subgraph code is required.
-> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository.
+> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository.
#### Block and call cache
-Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph.
+Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph.
-However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider.
+However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider.
If a block cache inconsistency is suspected, such as a tx receipt missing event:
@@ -304,7 +304,7 @@ If a block cache inconsistency is suspected, such as a tx receipt missing event:
#### Querying issues and errors
-Once a subgraph has been indexed, indexers can expect to serve queries via the subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process.
+Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process.
However, even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users.
@@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat
##### Analysing queries
-Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that subgraph or query. And then of course to resolve it, if possible.
+Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible.
In other cases, the trigger might be high memory usage on a query node, in which case the challenge is first to identify the query causing the issue.
@@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the
Once a table has been determined to be account-like, running `graphman stats account-like <sgdNNN>.<table>` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear <sgdNNN>.<table>`. It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again.
-For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.
+For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.
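The "distinct entities below 1% of rows" rule of thumb mentioned above can be expressed as a one-line check. This is only the heuristic from the text, not the exact statistics query Graph Node runs internally.

```python
def is_account_like(distinct_entities: int, total_rows: int) -> bool:
    """Heuristic: a table is a candidate for the account-like
    optimization when distinct entities make up less than 1% of
    its row versions (few entities, each changed very often)."""
    return total_rows > 0 and distinct_entities / total_rows < 0.01

# A Uniswap-style `pair` table: a few thousand pairs, millions of versions.
assert is_account_like(distinct_entities=3_000, total_rows=5_000_000)
# A table where most rows are distinct entities is not account-like.
assert not is_account_like(distinct_entities=90_000, total_rows=100_000)
```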
-#### Removing subgraphs
+#### Removing Subgraphs
> This is new functionality, which will be available in Graph Node 0.29.x
-At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
+At some point an indexer might want to remove a given Subgraph. This can easily be done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
diff --git a/website/src/pages/ar/indexing/tooling/graphcast.mdx b/website/src/pages/ar/indexing/tooling/graphcast.mdx
index 8fc00976ec28..d084edcd7067 100644
--- a/website/src/pages/ar/indexing/tooling/graphcast.mdx
+++ b/website/src/pages/ar/indexing/tooling/graphcast.mdx
@@ -10,10 +10,10 @@ Currently, the cost to broadcast information to other network participants is de
The Graphcast SDK (Software Development Kit) allows developers to build Radios, which are gossip-powered applications that Indexers can run to serve a given purpose. We also intend to create a few Radios (or provide support to other developers/teams that wish to build Radios) for the following use cases:
-- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)).
-- Conducting auctions and coordination for warp syncing subgraphs, substreams, and Firehose data from other Indexers.
-- Self-reporting on active query analytics, including subgraph request volumes, fee volumes, etc.
-- Self-reporting on indexing analytics, including subgraph indexing time, handler gas costs, indexing errors encountered, etc.
+- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)).
+- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers.
+- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc.
+- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc.
- Self-reporting on stack information including graph-node version, Postgres version, Ethereum client version, etc.
### Learn More
diff --git a/website/src/pages/ar/resources/benefits.mdx b/website/src/pages/ar/resources/benefits.mdx
index 2e1a0834591c..6899e348a912 100644
--- a/website/src/pages/ar/resources/benefits.mdx
+++ b/website/src/pages/ar/resources/benefits.mdx
@@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar
## Low Volume User (less than 100,000 queries per month)
-| Cost Comparison | Self Hosted | The Graph Network |
-| :-: | :-: | :-: |
-| Monthly server cost\* | $350 per month | $0 |
-| Query costs | $0+ | $0 per month |
-| Engineering time† | $400 per month | None, built into the network with globally distributed Indexers |
-| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) |
-| Cost per query | $0 | $0‡ |
-| Infrastructure | Centralized | Decentralized |
-| Geographic redundancy | $750+ per additional node | Included |
-| Uptime | Varies | 99.9%+ |
-| Total Monthly Costs | $750+ | $0 |
+| Cost Comparison | Self Hosted | The Graph Network |
+| :--------------------------: | :-------------------------------------: | :-------------------------------------------------------------: |
+| Monthly server cost\* | $350 per month | $0 |
+| Query costs | $0+ | $0 per month |
+| Engineering time† | $400 per month | None, built into the network with globally distributed Indexers |
+| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) |
+| Cost per query | $0 | $0‡ |
+| Infrastructure | Centralized | Decentralized |
+| Geographic redundancy | $750+ per additional node | Included |
+| Uptime | Varies | 99.9%+ |
+| Total Monthly Costs | $750+ | $0 |
## Medium Volume User (~3M queries per month)
-| Cost Comparison | Self Hosted | The Graph Network |
-| :-: | :-: | :-: |
-| Monthly server cost\* | $350 per month | $0 |
-| Query costs | $500 per month | $120 per month |
-| Engineering time† | $800 per month | None, built into the network with globally distributed Indexers |
-| Queries per month | Limited to infra capabilities | ~3,000,000 |
-| Cost per query | $0 | $0.00004 |
-| Infrastructure | Centralized | Decentralized |
-| Engineering expense | $200 per hour | Included |
-| Geographic redundancy | $1,200 in total costs per additional node | Included |
-| Uptime | Varies | 99.9%+ |
-| Total Monthly Costs | $1,650+ | $120 |
+| Cost Comparison | Self Hosted | The Graph Network |
+| :--------------------------: | :----------------------------------------: | :-------------------------------------------------------------: |
+| Monthly server cost\* | $350 per month | $0 |
+| Query costs | $500 per month | $120 per month |
+| Engineering time† | $800 per month | None, built into the network with globally distributed Indexers |
+| Queries per month | Limited to infra capabilities | ~3,000,000 |
+| Cost per query | $0 | $0.00004 |
+| Infrastructure | Centralized | Decentralized |
+| Engineering expense | $200 per hour | Included |
+| Geographic redundancy | $1,200 in total costs per additional node | Included |
+| Uptime | Varies | 99.9%+ |
+| Total Monthly Costs | $1,650+ | $120 |
## High Volume User (~30M queries per month)
-| Cost Comparison | Self Hosted | The Graph Network |
-| :-: | :-: | :-: |
-| Monthly server cost\* | $1100 per month, per node | $0 |
-| Query costs | $4000 | $1,200 per month |
-| Number of nodes needed | 10 | Not applicable |
-| Engineering time† | $6,000 or more per month | None, built into the network with globally distributed Indexers |
-| Queries per month | Limited to infra capabilities | ~30,000,000 |
-| Cost per query | $0 | $0.00004 |
-| Infrastructure | Centralized | Decentralized |
-| Geographic redundancy | $1,200 in total costs per additional node | Included |
-| Uptime | Varies | 99.9%+ |
-| Total Monthly Costs | $11,000+ | $1,200 |
+| Cost Comparison | Self Hosted | The Graph Network |
+| :--------------------------: | :-----------------------------------------: | :-------------------------------------------------------------: |
+| Monthly server cost\* | $1100 per month, per node | $0 |
+| Query costs | $4000 | $1,200 per month |
+| Number of nodes needed | 10 | Not applicable |
+| Engineering time† | $6,000 or more per month | None, built into the network with globally distributed Indexers |
+| Queries per month | Limited to infra capabilities | ~30,000,000 |
+| Cost per query | $0 | $0.00004 |
+| Infrastructure | Centralized | Decentralized |
+| Geographic redundancy | $1,200 in total costs per additional node | Included |
+| Uptime | Varies | 99.9%+ |
+| Total Monthly Costs | $11,000+ | $1,200 |
\*including costs for backup: $50-$100 per month
@@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar
‡Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries.
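The cost-per-query rows in the tables above follow directly from the monthly totals. A quick sanity-check sketch (the $120/3M and $1,200/30M figures are taken from the tables; the helper name is just illustrative):

```typescript
// Sanity check for the cost-per-query rows, using the monthly totals from the tables above.
function costPerQuery(monthlyFeeUsd: number, queriesPerMonth: number): number {
  return monthlyFeeUsd / queriesPerMonth
}

console.log(costPerQuery(120, 3_000_000))   // medium volume -> 0.00004
console.log(costPerQuery(1200, 30_000_000)) // high volume   -> 0.00004
```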
-Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet.
+Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self-hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet.
-Curating signal on a subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a subgraph, and later withdrawn—with potential to earn returns in the process).
+Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process).
## No Setup Costs & Greater Operational Efficiency
@@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy
Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally.
-Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/).
+Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/).
diff --git a/website/src/pages/ar/resources/glossary.mdx b/website/src/pages/ar/resources/glossary.mdx
index f922950390a6..d456a94f63ab 100644
--- a/website/src/pages/ar/resources/glossary.mdx
+++ b/website/src/pages/ar/resources/glossary.mdx
@@ -4,51 +4,51 @@ title: قائمة المصطلحات
- **The Graph**: A decentralized protocol for indexing and querying data.
-- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer.
+- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer.
-- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs.
-- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network.
+- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network.
-- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone.
+- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone.
- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries.
- **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards.
- 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network.
+ 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network.
- 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually.
+ 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually.
- **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit.
- **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake.
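  The 16x rule in that definition is simple arithmetic; a minimal sketch (ratio and worked figures taken from the definition above, function name illustrative):

```typescript
// Delegation capacity = 16x an Indexer's self-stake (ratio from the definition above).
const MAX_DELEGATION_RATIO = 16

function delegationCapacity(selfStakeGrt: number): number {
  return selfStakeGrt * MAX_DELEGATION_RATIO
}

// An Indexer with 1M GRT self-staked can accept up to 16M GRT of delegation.
console.log(delegationCapacity(1_000_000)) // 16000000
```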
-- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers.
+- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers.
-- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs.
+- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs.
- **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned.
-- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph.
+- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph.
-- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned.
+- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned.
-- **Data Consumer**: Any application or user that queries a subgraph.
+- **Data Consumer**: Any application or user that queries a Subgraph.
-- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network.
+- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network.
-- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example.
+- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example.
- **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day.
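  The "approximately 1 day" figure can be cross-checked against an assumed ~13-second average Ethereum block time (the 13s value is an assumption, not from the glossary):

```typescript
const BLOCKS_PER_EPOCH = 6646 // from the glossary entry above
const AVG_BLOCK_TIME_S = 13   // assumed average Ethereum block time

const epochSeconds = BLOCKS_PER_EPOCH * AVG_BLOCK_TIME_S
console.log((epochSeconds / 86_400).toFixed(2)) // epoch length in days
```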
-- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses:
+- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses:
- 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated.
+ 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated.
- 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data.
+ 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data.
-- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs.
+- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs.
- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide.
@@ -56,28 +56,28 @@ title: قائمة المصطلحات
- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned.
-- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT.
+- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT.
- **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT.
- **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network.
-- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer.
+- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer.
-- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer.
+- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer.
-- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations.
+- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations.
- **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way.
-- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol.
+- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol.
- **Graph CLI**: A command line interface tool for building and deploying to The Graph.
- **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again.
-- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake.
+- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network-related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake.
-- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings.
+- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings.
-- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2).
+- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2).
diff --git a/website/src/pages/ar/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/ar/resources/migration-guides/assemblyscript-migration-guide.mdx
index 9fe263f2f8b2..40086bb24579 100644
--- a/website/src/pages/ar/resources/migration-guides/assemblyscript-migration-guide.mdx
+++ b/website/src/pages/ar/resources/migration-guides/assemblyscript-migration-guide.mdx
@@ -2,13 +2,13 @@
title: دليل ترحيل AssemblyScript
---
-Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉
+Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉
-سيمكن ذلك لمطوري ال Subgraph من استخدام مميزات أحدث للغة AS والمكتبة القياسية.
+That will enable Subgraph developers to use newer features of the AS language and standard library.
This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂
-> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest.
+> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest.
## مميزات
@@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `
## كيف تقوم بالترقية؟
-1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`:
+1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`:
```yaml
...
@@ -52,7 +52,7 @@ dataSources:
...
mapping:
...
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
...
```
@@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null
maybeValue.aMethod()
```
-إذا لم تكن متأكدا من اختيارك ، فنحن نوصي دائما باستخدام الإصدار الآمن. إذا كانت القيمة غير موجودة ، فقد ترغب في القيام بعبارة if المبكرة مع قيمة راجعة في معالج الـ subgraph الخاص بك.
+If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to do an early `if` check with a return in your Subgraph handler.
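As a sketch of that early-return pattern in plain TypeScript (with a stand-in `Token` class and in-memory store instead of the real `graph-ts` entity API):

```typescript
// Minimal stand-in for an entity type whose load() may return null.
class Token {
  static store: Map<string, Token> = new Map()
  constructor(public id: string) {}
  static load(id: string): Token | null {
    return Token.store.get(id) ?? null
  }
}

function handleTransfer(id: string): string {
  const token = Token.load(id)
  if (token === null) {
    return "skipped" // safe early return instead of the unsafe `load()!`
  }
  return "processed " + token.id
}

Token.store.set("0xabc", new Token("0xabc"))
console.log(handleTransfer("0xabc")) // processed 0xabc
console.log(handleTransfer("0xdef")) // skipped
```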
### Variable Shadowing
@@ -132,7 +132,7 @@ in assembly/index.ts(4,3)
### مقارانات Null
-من خلال إجراء الترقية على ال Subgraph الخاص بك ، قد تحصل أحيانًا على أخطاء مثل هذه:
+After upgrading your Subgraph, you might sometimes get errors like these:
```typescript
ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'.
@@ -330,7 +330,7 @@ let wrapper = new Wrapper(y)
wrapper.n = wrapper.n + x // doesn't give compile time errors as it should
```
-لقد فتحنا مشكلة في مترجم AssemblyScript ، ولكن في الوقت الحالي إذا أجريت هذا النوع من العمليات في Subgraph mappings ، فيجب عليك تغييرها لإجراء فحص ل null قبل ذلك.
+We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check first.
```typescript
let wrapper = new Wrapper(y)
@@ -352,7 +352,7 @@ value.x = 10
value.y = 'content'
```
-سيتم تجميعها لكنها ستتوقف في وقت التشغيل ، وهذا يحدث لأن القيمة لم تتم تهيئتها ، لذا تأكد من أن ال subgraph قد قام بتهيئة قيمها ، على النحو التالي:
+It will compile but break at runtime because the value hasn't been initialized, so make sure your Subgraph initializes its values, like this:
```typescript
var value = new Type() // initialized
diff --git a/website/src/pages/ar/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/ar/resources/migration-guides/graphql-validations-migration-guide.mdx
index 29fed533ef8c..ebed96df1002 100644
--- a/website/src/pages/ar/resources/migration-guides/graphql-validations-migration-guide.mdx
+++ b/website/src/pages/ar/resources/migration-guides/graphql-validations-migration-guide.mdx
@@ -20,7 +20,7 @@ To be compliant with those validations, please follow the migration guide.
You can use the CLI migration tool to find any issues in your GraphQL operations and fix them. Alternatively you can update the endpoint of your GraphQL client to use the `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME` endpoint. Testing your queries against this endpoint will help you find the issues in your queries.
-> Not all subgraphs will need to be migrated, if you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.
+> Not all Subgraphs will need to be migrated; if you are using [GraphQL ESLint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.
## Migration CLI tool
diff --git a/website/src/pages/ar/resources/roles/curating.mdx b/website/src/pages/ar/resources/roles/curating.mdx
index d2f355055aac..e73785e92590 100644
--- a/website/src/pages/ar/resources/roles/curating.mdx
+++ b/website/src/pages/ar/resources/roles/curating.mdx
@@ -2,37 +2,37 @@
title: Curating
---
-Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index.
+Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good-quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for Indexers when determining which Subgraphs to index.
## What Does Signaling Mean for The Graph Network?
-Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed.
+Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed.
-Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives.
+Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives.
-Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them.
+Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them.
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
+The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
-When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
+When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
-If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with.
+If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with.
-Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below).
+Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below).
-
+
## كيفية الإشارة
-Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/)
+Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/)
-يمكن للمنسق الإشارة إلى إصدار معين ل subgraph ، أو يمكنه اختيار أن يتم ترحيل migrate إشاراتهم تلقائيا إلى أحدث إصدار لهذا ال subgraph. كلاهما استراتيجيات سليمة ولها إيجابيات وسلبيات.
+A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons.
-Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred.
+Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred.
Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay a 0.5% curation tax on all auto-migrated curation shares.
-> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy.
+> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than subsequent curators, because the first curator initializes the curation share tokens and also transfers tokens into The Graph proxy.
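The curation taxes described above (1% on every new signal, 0.5% on auto-migrated shares) can be sketched as follows. This is an illustrative calculation only; the function names and amounts are made up, and real share pricing on the bonding curve is more involved.

```python
# Illustrative sketch of the curation taxes described in the text:
# 1% burned on every new signal, 0.5% on shares auto-migrated to a new version.
CURATION_TAX = 0.01
AUTO_MIGRATE_TAX = 0.005

def signal(grt: float) -> float:
    """GRT value converted to curation shares after the 1% curation tax."""
    return grt * (1 - CURATION_TAX)

def auto_migrate(share_value_grt: float) -> float:
    """Share value remaining after one 0.5% auto-migration tax."""
    return share_value_grt * (1 - AUTO_MIGRATE_TAX)

signaled = signal(10_000)               # 10,000 GRT signaled -> 9,900 GRT of shares
after_upgrade = auto_migrate(signaled)  # value after one auto-migration
print(signaled, after_upgrade)
```

Each new Subgraph version published by the developer costs auto-migrating curators another 0.5%, which is why frequent publishing is discouraged.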
## Withdrawing your GRT
@@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time.
Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax).
-Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled.
+Once a curator withdraws their signal, indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled.
-However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph.
+However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph.
## المخاطر
1. سوق الاستعلام يعتبر حديثا في The Graph وهناك خطر من أن يكون٪ APY الخاص بك أقل مما تتوقع بسبب ديناميكيات السوق الناشئة.
-2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned.
-3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/).
-4. يمكن أن يفشل ال subgraph بسبب خطأ. ال subgraph الفاشل لا يمكنه إنشاء رسوم استعلام. نتيجة لذلك ، سيتعين عليك الانتظار حتى يصلح المطور الخطأ وينشر إصدارا جديدا.
- - إذا كنت مشتركا في أحدث إصدار من subgraph ، فسيتم ترحيل migrate أسهمك تلقائيا إلى هذا الإصدار الجديد. هذا سيتحمل ضريبة تنسيق بنسبة 0.5٪.
- - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax.
+2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned.
+3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/).
+4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version.
+ - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax.
+ - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax.
## الأسئلة الشائعة حول التنسيق
### 1. ما هي النسبة المئوية لرسوم الاستعلام التي يكسبها المنسقون؟
-By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance.
+By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance.
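The pro-rata split above can be sketched as a small calculation. The 10% curator cut comes from the text; the curator names and share amounts below are purely hypothetical.

```python
# Hedged sketch of the pro-rata distribution described above:
# 10% of a Subgraph's query fees go to its Curators, split by curation shares.
CURATOR_CUT = 0.10  # subject to governance

def curator_payouts(query_fees_grt: float, shares: dict[str, float]) -> dict[str, float]:
    pool = query_fees_grt * CURATOR_CUT          # portion reserved for Curators
    total = sum(shares.values())
    return {who: pool * s / total for who, s in shares.items()}

# 1,000 GRT of query fees; Alice holds 300 shares, Bob holds 100
payouts = curator_payouts(1_000, {"alice": 300, "bob": 100})
print(payouts)
```

With a 100 GRT curator pool, Alice's 75% of the shares earns 75 GRT and Bob's 25% earns 25 GRT.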
-### 2. كيف يمكنني تقرير ما إذا كان ال subgraph عالي الجودة لكي أقوم بالإشارة إليه؟
+### 2. How do I decide which Subgraphs are high quality to signal on?
-Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result:
+Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result:
-- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future
-- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on.
+- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future
+- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on.
-### 3. What’s the cost of updating a subgraph?
+### 3. What’s the cost of updating a Subgraph?
-Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas.
+Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas.
-### 4. How often can I update my subgraph?
+### 4. How often can I update my Subgraph?
-It’s suggested that you don’t update your subgraphs too frequently. See the question above for more details.
+It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details.
### 5. هل يمكنني بيع أسهم التنسيق الخاصة بي؟
diff --git a/website/src/pages/ar/resources/roles/delegating/undelegating.mdx b/website/src/pages/ar/resources/roles/delegating/undelegating.mdx
index 274fd08e0269..0756092ea10e 100644
--- a/website/src/pages/ar/resources/roles/delegating/undelegating.mdx
+++ b/website/src/pages/ar/resources/roles/delegating/undelegating.mdx
@@ -13,13 +13,11 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the
1. Visit [Graph Explorer](https://thegraph.com/explorer). Please make sure you're on Explorer and **not** Subgraph Studio.
2. Click on your profile. You can find it on the top right corner of the page.
-
- Make sure that your wallet is connected. If it's not connected, you will see the "connect" button instead.
3. Once you're in your profile, click on the Delegating tab. In the Delegating tab, you can view the list of Indexers you have delegated to.
4. Click on the Indexer from which you wish to withdraw your tokens.
-
- Make sure to note the specific Indexer, as you will need to find them again to withdraw.
5. Select the "Undelegate" option by clicking on the three dots next to the Indexer on the right side, see image below:
@@ -37,11 +35,9 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the
### Step-by-Step
1. Find your delegation transaction on Arbiscan.
-
- Here's an [example transaction on Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a)
2. Navigate to "Transaction Action" where you can find the staking extension contract:
-
- [This is the staking extension contract for the example listed above](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03)
3. Then click on "Contract". 
@@ -66,7 +62,7 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the
11. You can see how much you will have available to withdraw by calling the `getWithdrawableDelegatedTokens` on Read Custom and passing it your delegation tuple. See screenshot below:
- 
+ 
## مصادر إضافية
diff --git a/website/src/pages/ar/resources/subgraph-studio-faq.mdx b/website/src/pages/ar/resources/subgraph-studio-faq.mdx
index 74c0228e4093..ec613ed68df2 100644
--- a/website/src/pages/ar/resources/subgraph-studio-faq.mdx
+++ b/website/src/pages/ar/resources/subgraph-studio-faq.mdx
@@ -4,7 +4,7 @@ title: الأسئلة الشائعة حول الفرعيةرسم بياني اس
## 1. What is Subgraph Studio?
-[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys.
+[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys.
## 2. How do I create an API Key?
@@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th
After creating an API Key, in the Security section, you can define the domains that can query a specific API Key.
-## 5. Can I transfer my subgraph to another owner?
+## 5. Can I transfer my Subgraph to another owner?
-Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'.
+Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'.
-Note that you will no longer be able to see or edit the subgraph in Studio once it has been transferred.
+Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred.
-## 6. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use?
+## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use?
-You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio.
+You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio.
-تذكر أنه يمكنك إنشاء API key والاستعلام عن أي subgraph منشور على الشبكة ، حتى إذا قمت ببناء subgraph بنفسك. حيث أن الاستعلامات عبر API key الجديد ، هي استعلامات مدفوعة مثل أي استعلامات أخرى على الشبكة.
+Remember that you can create an API key and query any Subgraph published to the network, even if you built the Subgraph yourself. Queries made via the new API key are paid queries, like any other query on the network.
diff --git a/website/src/pages/ar/resources/tokenomics.mdx b/website/src/pages/ar/resources/tokenomics.mdx
index 511af057534f..fa0f098b22c8 100644
--- a/website/src/pages/ar/resources/tokenomics.mdx
+++ b/website/src/pages/ar/resources/tokenomics.mdx
@@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s
## نظره عامة
-The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph.
+The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph.
## Specifics
@@ -24,9 +24,9 @@ There are four primary network participants:
1. Delegators - Delegate GRT to Indexers & secure the network
-2. المنسقون (Curators) - يبحثون عن أفضل subgraphs للمفهرسين
+2. Curators - Find the best Subgraphs for Indexers
-3. Developers - Build & query subgraphs
+3. Developers - Build & query Subgraphs
4. المفهرسون (Indexers) - العمود الفقري لبيانات blockchain
@@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth
## Delegators (Passively earn GRT)
-Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually.
+Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually.
For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually.
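The example above works out as a one-line calculation. The sketch below also applies the 0.5% delegation tax mentioned later on this page; the 10% rate is the Indexer's effective rate to Delegators, and all figures are illustrative.

```python
# Reproducing the example above: 15,000 GRT delegated to an Indexer offering
# an effective 10% rate yields roughly 1,500 GRT per year. The 0.5% delegation
# tax (burned at delegation time) slightly reduces the working stake.
DELEGATION_TAX = 0.005

def annual_delegation_reward(delegated_grt: float, indexer_rate: float) -> float:
    staked = delegated_grt * (1 - DELEGATION_TAX)  # stake left after the burn
    return staked * indexer_rate

print(annual_delegation_reward(15_000, 0.10))  # ~1,500 GRT annually
```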
@@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head
## Curators (Earn GRT)
-Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed.
+Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed.
-Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation.
+Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation.
-Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT.
+Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT.
## Developers
-Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants.
+Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants.
-### إنشاء subgraph
+### Creating a Subgraph
-Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers.
+Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers.
-Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network.
+Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network.
-### الاستعلام عن subgraph موجود
+### Querying an existing Subgraph
-Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph.
+Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph.
Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol.
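A query against a published Subgraph is an ordinary GraphQL-over-HTTP POST. The sketch below only builds the request; the API key, Subgraph ID, and entity fields are placeholders, and the gateway URL shape is an assumption for illustration.

```python
# Illustrative only: shaping a GraphQL query for a published Subgraph.
# API_KEY and SUBGRAPH_ID are placeholders, not real credentials.
import json
from urllib import request

API_KEY = "your-api-key"        # issued in Subgraph Studio (placeholder)
SUBGRAPH_ID = "SubgraphIdHere"  # placeholder Subgraph ID
url = f"https://gateway.thegraph.com/api/{API_KEY}/subgraphs/id/{SUBGRAPH_ID}"

# A hypothetical query; real field names depend on the Subgraph's schema.
query = {"query": "{ tokens(first: 5) { id symbol } }"}
req = request.Request(
    url,
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)
# response = request.urlopen(req)  # uncomment to actually send the query
print(req.get_method(), req.full_url)
```

The query fee for each such request is paid in GRT from the caller's billing balance and distributed to network participants.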
@@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th
## Indexers (Earn GRT)
-Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs.
+Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs.
Indexers can earn GRT rewards in two ways:
-1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)).
+1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)).
-2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately.
+2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately.
-Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph.
+Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph.
In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve.
-Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network.
+Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network.
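The 16x delegation capacity rule above can be sketched as a simple cap. This is an illustrative model, not the protocol's actual contract logic; the function name is made up.

```python
# Sketch of the over-delegation rule described above: delegation beyond
# 16x an Indexer's self-stake is accepted but cannot be used.
MAX_DELEGATION_RATIO = 16

def usable_delegation(self_stake_grt: float, delegated_grt: float) -> float:
    """Delegated GRT the Indexer can actually allocate on Subgraphs."""
    return min(delegated_grt, self_stake_grt * MAX_DELEGATION_RATIO)

# An Indexer at the 100,000 GRT minimum self-stake can use at most
# 1,600,000 GRT of delegation; anything above that sits idle.
print(usable_delegation(100_000, 2_000_000))
```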
The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors.
## Token Supply: Burning & Issuance
-The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network.
+The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network.
-The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data.
+The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and 1% of query fees for blockchain data.

diff --git a/website/src/pages/ar/sps/introduction.mdx b/website/src/pages/ar/sps/introduction.mdx
index 2336653c0e06..e74abf2f0998 100644
--- a/website/src/pages/ar/sps/introduction.mdx
+++ b/website/src/pages/ar/sps/introduction.mdx
@@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs
sidebarTitle: مقدمة
---
-Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
## نظره عامة
-Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
+Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
### Specifics
There are two methods of enabling this technology:
-1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph.
+1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph.
-2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities.
+2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities.
-You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node.
+You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node.
### مصادر إضافية
@@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil
- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
diff --git a/website/src/pages/ar/sps/sps-faq.mdx b/website/src/pages/ar/sps/sps-faq.mdx
index 88f4ddbb66d7..c19b0a950297 100644
--- a/website/src/pages/ar/sps/sps-faq.mdx
+++ b/website/src/pages/ar/sps/sps-faq.mdx
@@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi
Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams.
-## ما هي الغرافات الفرعية المدعومة بسبستريمز؟
+## What are Substreams-powered Subgraphs?
-[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities.
+[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities.
-If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API.
+If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API.
-## كيف تختلف الغرافات الفرعية التي تعمل بسبستريمز عن الغرافات الفرعية؟
+## How are Substreams-powered Subgraphs different from Subgraphs?
Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain.
-By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times.
+By contrast, Substreams-powered Subgraphs have a single datasource which references a Substreams package, which is processed by Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times.
-## ما هي فوائد استخدام الغرافات الفرعية المدعومة بسبستريمز؟
+## What are the benefits of using Substreams-powered Subgraphs?
-Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
+Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
-## ماهي فوائد سبستريمز؟
+## What are the benefits of Substreams?
@@ -35,7 +35,7 @@ There are many benefits to using Substreams, including:
- High-performance indexing: Orders of magnitude faster indexing through large-scale clusters of parallel operations (think BigQuery).
-- التوجيه لأي مكان: يمكنك توجيه بياناتك لأي مكان ترغب فيه: بوستجريسكيو، مونغو دي بي، كافكا، الغرافات الفرعية، الملفات المسطحة، جداول جوجل.
+- Sink anywhere: Sink your data anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets.
- Programmable: Use code to customize extraction, do transformation-time aggregations, and model your output for multiple sinks.
@@ -63,17 +63,17 @@ There are many benefits to using Firehose, including:
-- يستفيد من الملفات المسطحة: يتم استخراج بيانات سلسلة الكتل إلى ملفات مسطحة، وهي أرخص وأكثر موارد الحوسبة تحسيناً.
+- Leverages flat files: Blockchain data is extracted into flat files, which are cheaper and more compute-optimized resources.
-## أين يمكن للمطورين الوصول إلى مزيد من المعلومات حول الغرافات الفرعية المدعومة بسبستريمز و سبستريمز؟
+## Where can developers access more information about Substreams-powered Subgraphs and Substreams?
The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules.
-The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph.
+The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph.
The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code.
## What is the role of Rust modules in Substreams?
-تعتبر وحدات رست مكافئة لمعينات أسمبلي اسكريبت في الغرافات الفرعية. يتم ترجمتها إلى ويب أسيمبلي بنفس الطريقة، ولكن النموذج البرمجي يسمح بالتنفيذ الموازي. تحدد وحدات رست نوع التحويلات والتجميعات التي ترغب في تطبيقها على بيانات سلاسل الكتل الخام.
+Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data.
See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details.
@@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst
When using Substreams, the composition happens at the transformation layer enabling cached modules to be re-used.
-على سبيل المثال، يمكن لأحمد بناء وحدة أسعار اسواق الصرف اللامركزية، ويمكن لإبراهيم استخدامها لبناء مجمِّع حجم للتوكن المهتم بها، ويمكن لآدم دمج أربع وحدات أسعار ديكس فردية لإنشاء مورد أسعار. سيقوم طلب واحد من سبستريمز بتجميع جميع هذه الوحدات الفردية، وربطها معًا لتقديم تدفق بيانات أكثر تطوراً ودقة. يمكن استخدام هذا التدفق لملءغراف فرعي ويمكن الاستعلام عنه من قبل المستخدمين.
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for tokens of interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individual modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph and queried by consumers.
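+The composition described above is wired together in the Substreams manifest through package imports. Below is a hypothetical `substreams.yaml` sketch (the package name, `.spkg` URL, module names, and Protobuf type are illustrative assumptions, not real packages):
+
+```yaml
+specVersion: v0.1.0
+package:
+  name: price_oracle # hypothetical package name
+  version: v0.1.0
+imports:
+  # Reuse a previously published DEX price package (illustrative URL)
+  dex_prices: https://example.com/dex-prices-v0.1.0.spkg
+modules:
+  - name: map_price_oracle
+    kind: map
+    inputs:
+      # Consume the imported module's cached output instead of re-indexing from scratch
+      - map: dex_prices:map_prices
+    output:
+      type: proto:example.v1.PriceOracle # hypothetical Protobuf type
+```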
-## كيف يمكنك إنشاء ونشر غراف فرعي مدعوم بسبستريمز؟
+## How can you build and deploy a Substreams-powered Subgraph?
After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/).
-## أين يمكنني العثور على أمثلة على سبستريمز والغرافات الفرعية المدعومة بسبستريمز؟
+## Where can I find examples of Substreams and Substreams-powered Subgraphs?
-يمكنك زيارة [جيت هب](https://github.com/pinax-network/awesome-substreams) للعثور على أمثلة للسبستريمز والغرافات الفرعية المدعومة بسبستريمز.
+You can visit [this GitHub repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs.
-## ماذا تعني السبستريمز والغرافات الفرعية المدعومة بسبستريمز بالنسبة لشبكة الغراف؟
+## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network?
-إن التكامل مع سبستريمز والغرافات الفرعية المدعومة بسبستريمز واعدة بالعديد من الفوائد، بما في ذلك عمليات فهرسة عالية الأداء وقابلية أكبر للتركيبية من خلال استخدام وحدات المجتمع والبناء عليها.
+The integration of Substreams and Substreams-powered Subgraphs promises many benefits, including highly performant indexing and greater composability by leveraging community modules and building on top of them.
diff --git a/website/src/pages/ar/sps/triggers.mdx b/website/src/pages/ar/sps/triggers.mdx
index 05eccf4d55fb..1bf1a2cf3f51 100644
--- a/website/src/pages/ar/sps/triggers.mdx
+++ b/website/src/pages/ar/sps/triggers.mdx
@@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL.
-## نظره عامة
+## Overview
-Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
+Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
-By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework.
+By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework.
### Defining `handleTransactions`
-The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created.
+The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created.
```tsx
export function handleTransactions(bytes: Uint8Array): void {
@@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file:
1. The bytes containing Substreams data are decoded into the generated `Transactions` object; this object is used like any other AssemblyScript object
2. Looping over the transactions
-3. Create a new subgraph entity for every transaction
+3. Create a new Subgraph entity for every transaction
-To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/).
+To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/).
-### مصادر إضافية
+### Additional Resources
diff --git a/website/src/pages/ar/sps/tutorial.mdx b/website/src/pages/ar/sps/tutorial.mdx
index 21f99fff2832..dd85fa999764 100644
--- a/website/src/pages/ar/sps/tutorial.mdx
+++ b/website/src/pages/ar/sps/tutorial.mdx
@@ -1,9 +1,9 @@
---
-title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana'
+title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana"
sidebarTitle: Tutorial
---
-Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token.
+Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token.
## Get Started
@@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs
### Step 2: Generate the Subgraph Manifest
-Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container:
+Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container:
```bash
substreams codegen subgraph
@@ -73,7 +73,7 @@ dataSources:
moduleName: map_spl_transfers # Module defined in the substreams.yaml
file: ./my-project-sol-v0.1.0.spkg
mapping:
- apiVersion: 0.0.7
+ apiVersion: 0.0.9
kind: substreams/graph-entities
file: ./src/mappings.ts
handler: handleTriggers
@@ -81,7 +81,7 @@ dataSources:
### Step 3: Define Entities in `schema.graphql`
-Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file.
+Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file.
Here is an example:
@@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s
With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory.
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id:
+The example below demonstrates how to extract the non-derived transfers associated with the Orca account ID into Subgraph entities:
```ts
import { Protobuf } from 'as-proto/assembly'
@@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command:
npm run protogen
```
-This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler.
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler.
### Conclusion
-Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
+Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
### Video Tutorial
diff --git a/website/src/pages/ar/subgraphs/_meta-titles.json b/website/src/pages/ar/subgraphs/_meta-titles.json
index 0556abfc236c..3fd405eed29a 100644
--- a/website/src/pages/ar/subgraphs/_meta-titles.json
+++ b/website/src/pages/ar/subgraphs/_meta-titles.json
@@ -1,6 +1,6 @@
{
"querying": "Querying",
"developing": "Developing",
- "cookbook": "Cookbook",
+ "guides": "How-to Guides",
"best-practices": "Best Practices"
}
diff --git a/website/src/pages/ar/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/ar/subgraphs/best-practices/avoid-eth-calls.mdx
index e40a7b3712e4..07249c97dd2a 100644
--- a/website/src/pages/ar/subgraphs/best-practices/avoid-eth-calls.mdx
+++ b/website/src/pages/ar/subgraphs/best-practices/avoid-eth-calls.mdx
@@ -1,19 +1,19 @@
---
title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls
-sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls'
+sidebarTitle: Avoiding eth_calls
---
## TLDR
-`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`.
+`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`.
## Why Avoiding `eth_calls` Is a Best Practice
-Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`, however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our subgraphs, we can significantly improve our indexing speed.
+Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down Subgraph indexing, as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating `eth_calls` in our Subgraphs, we can significantly improve our indexing speed.
### What Does an eth_call Look Like?
-`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need:
+`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need:
```solidity
event Transfer(address indexed from, address indexed to, uint256 value);
@@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void {
}
```
+This is functional; however, it is not ideal as it slows down our Subgraph’s indexing.
+This is functional, however is not ideal as it slows down our Subgraph’s indexing.
## How to Eliminate `eth_calls`
@@ -54,7 +54,7 @@ Ideally, the smart contract should be updated to emit all necessary data within
event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo);
```
-With this update, the subgraph can directly index the required data without external calls:
+With this update, the Subgraph can directly index the required data without external calls:
```typescript
import { Address } from '@graphprotocol/graph-ts'
@@ -96,11 +96,11 @@ The portion highlighted in yellow is the call declaration. The part before the c
The handler itself accesses the result of this `eth_call` exactly as in the previous section, by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory, and the call from the handler will retrieve the result from this in-memory cache instead of making an actual RPC call.
-Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0.
+Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0.
## Conclusion
-You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs.
+You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs.
## Subgraph Best Practices 1-6
diff --git a/website/src/pages/ar/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/ar/subgraphs/best-practices/derivedfrom.mdx
index db3a49928c89..093eb29255ab 100644
--- a/website/src/pages/ar/subgraphs/best-practices/derivedfrom.mdx
+++ b/website/src/pages/ar/subgraphs/best-practices/derivedfrom.mdx
@@ -1,11 +1,11 @@
---
title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom
-sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom'
+sidebarTitle: Arrays with @derivedFrom
---
## TLDR
-Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly.
+Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly.
## How to Use the `@derivedFrom` Directive
@@ -15,7 +15,7 @@ You just need to add a `@derivedFrom` directive after your array in your schema.
comments: [Comment!]! @derivedFrom(field: "post")
```
-`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient.
+`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient.
### Example Use Case for `@derivedFrom`
@@ -60,17 +60,17 @@ type Comment @entity {
Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded.
-This will not only make our subgraph more efficient, but it will also unlock three features:
+This will not only make our Subgraph more efficient, but it will also unlock three features:
1. We can query the `Post` and see all of its comments.
2. We can do a reverse lookup and query any `Comment` and see which post it comes from.
-3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings.
+3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings.
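+As a sketch of the first two features, queries against the `Post`/`Comment` schema above might look like the following (field names are assumptions based on that schema):
+
+```graphql
+{
+  # Query posts together with their derived comments
+  posts {
+    id
+    comments {
+      id
+    }
+  }
+  # Reverse lookup: query comments and see which post each comes from
+  comments {
+    id
+    post {
+      id
+    }
+  }
+}
+```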
## Conclusion
-Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval.
+Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval.
For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/).
diff --git a/website/src/pages/ar/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/ar/subgraphs/best-practices/grafting-hotfix.mdx
index b77a40a5be90..d8de3e7a1fa2 100644
--- a/website/src/pages/ar/subgraphs/best-practices/grafting-hotfix.mdx
+++ b/website/src/pages/ar/subgraphs/best-practices/grafting-hotfix.mdx
@@ -1,26 +1,26 @@
---
title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment
-sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing'
+sidebarTitle: Grafting and Hotfixing
---
## TLDR
-Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones.
+Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones.
-### نظره عامة
+### Overview
-This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services.
+This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services.
## Benefits of Grafting for Hotfixes
1. **Rapid Deployment**
- - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing.
- - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted.
+ - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing.
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted.
2. **Data Preservation**
- - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records.
+ - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records.
- **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data.
3. **Efficiency**
@@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati
1. **Initial Deployment Without Grafting**
- - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected.
- - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes.
+ - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected.
+ - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes.
2. **Implementing the Hotfix with Grafting**
- **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event.
- - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix.
- - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph.
- - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible.
+ - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix.
+ - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph.
+ - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible.
3. **Post-Hotfix Actions**
- - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue.
- - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance.
+ - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue.
+ - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance.
> Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance.
- - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph.
+ - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph.
4. **Important Considerations**
- **Careful Block Selection**: Choose the graft block number carefully to prevent data loss.
- **Tip**: Use the block number of the last correctly processed event.
- - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID.
- - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment.
- - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features.
+ - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID.
+ - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment.
+ - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features.
## Example: Deploying a Hotfix with Grafting
-Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix.
+Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix.
1. **Failed Subgraph Manifest (subgraph.yaml)**
```yaml
- specVersion: 1.0.0
+ specVersion: 1.3.0
schema:
file: ./schema.graphql
dataSources:
@@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
startBlock: 5000000
mapping:
kind: ethereum/events
- apiVersion: 0.0.7
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Withdrawal
@@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
2. **New Grafted Subgraph Manifest (subgraph.yaml)**
```yaml
- specVersion: 1.0.0
+ specVersion: 1.3.0
schema:
file: ./schema.graphql
dataSources:
@@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
startBlock: 6000001 # Block after the last indexed block
mapping:
kind: ethereum/events
- apiVersion: 0.0.7
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Withdrawal
@@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
features:
- grafting
graft:
- base: QmBaseDeploymentID # Deployment ID of the failed subgraph
+ base: QmBaseDeploymentID # Deployment ID of the failed Subgraph
block: 6000000 # Last successfully indexed block
```
**Explanation:**
-- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract.
+- **Data Source Update**: The new Subgraph points to `0xNewContractAddress`, which may be a fixed version of the smart contract.
- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error.
- **Grafting Configuration**:
- - **base**: Deployment ID of the failed subgraph.
+ - **base**: Deployment ID of the failed Subgraph.
- **block**: Block number where grafting should begin.
3. **Deployment Steps**
@@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
- **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations.
- **Deploy the Subgraph**:
- Authenticate with the Graph CLI.
- - Deploy the new subgraph using `graph deploy`.
+ - Deploy the new Subgraph using `graph deploy`.
4. **Post-Deployment**
- - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point.
+ - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point.
- **Monitor Data**: Ensure that new data is being captured and the hotfix is effective.
- **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability.
@@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance.
-- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema.
+- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema.
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing.
-- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability.
+- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability.
### Risk Management
@@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec
## Conclusion
-Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to:
+Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to:
- **Quickly Recover** from critical errors without re-indexing.
- **Preserve Historical Data**, maintaining continuity for applications and users.
- **Ensure Service Availability** by minimizing downtime during critical fixes.
-However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability.
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability.
-## مصادر إضافية
+## Additional Resources
- **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting
- **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID.
-By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable.
+By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable.
## Subgraph Best Practices 1-6
diff --git a/website/src/pages/ar/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/ar/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx
index 6ff60ec9ab34..3a633244e0f2 100644
--- a/website/src/pages/ar/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx
+++ b/website/src/pages/ar/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx
@@ -1,6 +1,6 @@
---
title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs
-sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs'
+sidebarTitle: Immutable Entities and Bytes as IDs
---
## TLDR
@@ -50,12 +50,12 @@ While other types for IDs are possible, such as String and Int8, it is recommend
### Reasons to Not Use Bytes as IDs
1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used.
-2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used.
+2. If integrating a Subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used.
3. Indexing and querying performance improvements are not desired.
### Concatenating With Bytes as IDs
-It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance.
+It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes Subgraph indexing and querying performance.
Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant.
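As a rough illustration, here is what byte-level concatenation looks like in plain TypeScript. This only models the idea behind `concatI32`; the 4-byte little-endian layout is an assumption, and in a real mapping you would call `event.transaction.hash.concatI32(event.logIndex.toI32())` from `graph-ts` instead:

```typescript
// Plain-TypeScript model of building a Bytes ID from a transaction hash
// plus a log index, instead of hex-string concatenation.
// Illustrative only — not the graph-ts implementation.
function concatI32(bytes: Uint8Array, value: number): Uint8Array {
  const tail = new Uint8Array(4);
  new DataView(tail.buffer).setInt32(0, value, true); // little-endian (assumed)
  const out = new Uint8Array(bytes.length + 4);
  out.set(bytes, 0);
  out.set(tail, bytes.length);
  return out;
}

// A 4-byte stand-in for a 32-byte transaction hash:
const txHash = new Uint8Array([0xde, 0xad, 0xbe, 0xef]);
const id = concatI32(txHash, 1); // fixed-width bytes, no string allocation
```

The result is a fixed-width byte array, which the underlying database can store, index, and compare far more cheaply than a variable-length hex string.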
@@ -172,7 +172,7 @@ Query Response:
## Conclusion
-Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds.
+Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds.
Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/).
diff --git a/website/src/pages/ar/subgraphs/best-practices/pruning.mdx b/website/src/pages/ar/subgraphs/best-practices/pruning.mdx
index 1b51dde8894f..2d4f9ad803e0 100644
--- a/website/src/pages/ar/subgraphs/best-practices/pruning.mdx
+++ b/website/src/pages/ar/subgraphs/best-practices/pruning.mdx
@@ -1,11 +1,11 @@
---
title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning
-sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints'
+sidebarTitle: Pruning with indexerHints
---
## TLDR
-[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block, and removing unused entities from a subgraph’s database will improve a subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph.
+[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph.
## How to Prune a Subgraph With `indexerHints`
@@ -13,14 +13,14 @@ Add a section called `indexerHints` in the manifest.
`indexerHints` has three `prune` options:
-- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0.
+- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0.
- `prune: `: Sets a custom limit on the number of historical blocks to retain.
- `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. `prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired.
-We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`:
+We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`:
```yaml
-specVersion: 1.0.0
+specVersion: 1.3.0
schema:
file: ./schema.graphql
indexerHints:
@@ -39,7 +39,7 @@ dataSources:
## Conclusion
-Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements.
+Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements.
## Subgraph Best Practices 1-6
diff --git a/website/src/pages/ar/subgraphs/best-practices/timeseries.mdx b/website/src/pages/ar/subgraphs/best-practices/timeseries.mdx
index 74e56c406044..d713d6cd8864 100644
--- a/website/src/pages/ar/subgraphs/best-practices/timeseries.mdx
+++ b/website/src/pages/ar/subgraphs/best-practices/timeseries.mdx
@@ -1,11 +1,11 @@
---
title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations
-sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations'
+sidebarTitle: Timeseries and Aggregations
---
## TLDR
-Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance.
+Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance.
## نظره عامة
@@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri
## How to Implement Timeseries and Aggregations
+### Prerequisites
+
+This feature requires `specVersion` 1.1.0 or above.
+
### Defining Timeseries Entities
A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements:
@@ -51,7 +55,7 @@ Example:
type Data @entity(timeseries: true) {
id: Int8!
timestamp: Timestamp!
- price: BigDecimal!
+ amount: BigDecimal!
}
```
@@ -68,11 +72,11 @@ Example:
type Stats @aggregation(intervals: ["hour", "day"], source: "Data") {
id: Int8!
timestamp: Timestamp!
- sum: BigDecimal! @aggregate(fn: "sum", arg: "price")
+ sum: BigDecimal! @aggregate(fn: "sum", arg: "amount")
}
```
-In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum.
+In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum.
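Conceptually, Graph Node buckets the raw `Data` points by interval and applies the aggregate function for you. A minimal plain-TypeScript sketch of the hourly `sum` (illustrative only — you never write this logic yourself, and timestamps are simplified to seconds):

```typescript
interface DataPoint {
  timestamp: number; // seconds, for simplicity of the sketch
  amount: number;
}

// Sum `amount` per hour, keyed by the bucket's start timestamp.
function aggregateHourlySum(points: DataPoint[]): Map<number, number> {
  const buckets = new Map<number, number>();
  for (const p of points) {
    const bucketStart = Math.floor(p.timestamp / 3600) * 3600;
    buckets.set(bucketStart, (buckets.get(bucketStart) ?? 0) + p.amount);
  }
  return buckets;
}
```

Because this rollup happens inside the database rather than in your mappings, the raw points never pass through handler code, which is where the indexing-speed gains come from.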
### Querying Aggregated Data
@@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar
### Conclusion
-Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach:
+Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach:
- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead.
- Simplifies Development: Eliminates the need for manual aggregation logic in mappings.
- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness.
-By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs.
+By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs.
## Subgraph Best Practices 1-6
diff --git a/website/src/pages/ar/subgraphs/billing.mdx b/website/src/pages/ar/subgraphs/billing.mdx
index e5b5deb5c4ef..71e44f86c1ab 100644
--- a/website/src/pages/ar/subgraphs/billing.mdx
+++ b/website/src/pages/ar/subgraphs/billing.mdx
@@ -4,12 +4,14 @@ title: الفوترة
## Querying Plans
-There are two plans to use when querying subgraphs on The Graph Network.
+There are two plans to use when querying Subgraphs on The Graph Network.
- **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp.
- **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases.
+Learn more about pricing [here](https://thegraph.com/studio-pricing/).
+
## Query Payments with credit card
diff --git a/website/src/pages/ar/subgraphs/cookbook/arweave.mdx b/website/src/pages/ar/subgraphs/cookbook/arweave.mdx
index c1ec421993b4..4bb8883b4bd0 100644
--- a/website/src/pages/ar/subgraphs/cookbook/arweave.mdx
+++ b/website/src/pages/ar/subgraphs/cookbook/arweave.mdx
@@ -2,7 +2,7 @@
title: Building Subgraphs on Arweave
---
-> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave subgraphs!
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs!
In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain.
@@ -25,12 +25,12 @@ The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are
To be able to build and deploy Arweave Subgraphs, you need two packages:
-1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
-2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.
+1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
+2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.
## Subgraph's components
-There are three components of a subgraph:
+There are three components of a Subgraph:
### 1. Manifest - `subgraph.yaml`
@@ -40,25 +40,25 @@ Defines the data sources of interest, and how they should be processed. Arweave
Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body.
-The requirements for Arweave subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
### 3. AssemblyScript Mappings - `mapping.ts`
This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based off the schema you have listed.
-During subgraph development there are two key commands:
+During Subgraph development there are two key commands:
```
$ graph codegen # generates types from the schema file identified in the manifest
-$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder
+$ graph build # generates WebAssembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
```
## تعريف Subgraph Manifest
-The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for an Arweave subgraph:
+The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph:
```yaml
-specVersion: 0.0.5
+specVersion: 1.3.0
description: Arweave Blocks Indexing
schema:
file: ./schema.graphql # link to the schema file
@@ -70,7 +70,7 @@ dataSources:
owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet
startBlock: 0 # set this to 0 to start indexing from chain genesis
mapping:
- apiVersion: 0.0.5
+ apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/blocks.ts # link to the file with the Assemblyscript mappings
entities:
@@ -82,7 +82,7 @@ dataSources:
- handler: handleTx # the function name in the mapping file
```
-- Arweave subgraphs introduce a new kind of data source (`arweave`)
+- Arweave Subgraphs introduce a new kind of data source (`arweave`)
- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet`
- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet
@@ -99,7 +99,7 @@ Arweave data sources support two types of handlers:
## تعريف المخطط
-Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
## AssemblyScript Mappings
@@ -152,7 +152,7 @@ Writing the mappings of an Arweave Subgraph is very similar to writing the mappi
## Deploying an Arweave Subgraph in Subgraph Studio
-Once your subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command.
+Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command.
```bash
graph deploy --access-token
@@ -160,25 +160,25 @@ graph deploy --access-token
## Querying an Arweave Subgraph
-The GraphQL endpoint for Arweave subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
+The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
## أمثلة على الـ Subgraphs
-Here is an example subgraph for reference:
+Here is an example Subgraph for reference:
-- [Example subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions)
+- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions)
## FAQ
-### Can a subgraph index Arweave and other chains?
+### Can a Subgraph index Arweave and other chains?
-No, a subgraph can only support data sources from one chain/network.
+No, a Subgraph can only support data sources from one chain/network.
### Can I index the stored files on Arweave?
Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions).
-### Can I identify Bundlr bundles in my subgraph?
+### Can I identify Bundlr bundles in my Subgraph?
This is not currently supported.
@@ -188,7 +188,7 @@ The source.owner can be the user's public key or account address.
### What is the current encryption format?
-Data is generally passed into the mappings as Bytes, which if stored directly is returned in the subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/).
+Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (e.g. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/).
The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`:
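As a sketch of what such a helper can look like (the canonical version is the one referenced above; dropping padding in the URL-safe variant is an assumption here):

```typescript
// Illustrative base64 encoder matching the bytesToBase64 signature.
const ALPHABET =
  "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string {
  // base64url swaps '+' and '/' for '-' and '_' (RFC 4648).
  const alphabet = urlSafe
    ? ALPHABET.replace("+", "-").replace("/", "_")
    : ALPHABET;
  let result = "";
  for (let i = 0; i < bytes.length; i += 3) {
    const b0 = bytes[i];
    const b1 = i + 1 < bytes.length ? bytes[i + 1] : 0;
    const b2 = i + 2 < bytes.length ? bytes[i + 2] : 0;
    result += alphabet[b0 >> 2];
    result += alphabet[((b0 & 0x03) << 4) | (b1 >> 4)];
    result += i + 1 < bytes.length ? alphabet[((b1 & 0x0f) << 2) | (b2 >> 6)] : "=";
    result += i + 2 < bytes.length ? alphabet[b2 & 0x3f] : "=";
  }
  // URL-safe variants commonly omit the '=' padding (an assumption here).
  return urlSafe ? result.replace(/=+$/, "") : result;
}
```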
diff --git a/website/src/pages/ar/subgraphs/cookbook/enums.mdx b/website/src/pages/ar/subgraphs/cookbook/enums.mdx
index 9508aa864b6c..846faecc1706 100644
--- a/website/src/pages/ar/subgraphs/cookbook/enums.mdx
+++ b/website/src/pages/ar/subgraphs/cookbook/enums.mdx
@@ -10,7 +10,7 @@ Enums, or enumeration types, are a specific data type that allows you to define
### Example of Enums in Your Schema
-If you're building a subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned.
+If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned.
You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity.
@@ -65,7 +65,7 @@ Enums provide type safety, minimize typo risks, and ensure consistent and reliab
> Note: The following guide uses the CryptoCoven NFT smart contract.
-To define enums for the various marketplaces where NFTs are traded, use the following in your subgraph schema:
+To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema:
```gql
# Enum for Marketplaces that the CryptoCoven contract interacted with(likely a Trade/Mint)
@@ -80,7 +80,7 @@ enum Marketplace {
## Using Enums for NFT Marketplaces
-Once defined, enums can be used throughout your subgraph to categorize transactions or events.
+Once defined, enums can be used throughout your Subgraph to categorize transactions or events.
For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum.
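In plain TypeScript terms (AssemblyScript mappings behave similarly), the enum field is set using the enum value's string representation — the marketplace names and entity shape below are illustrative, not taken from the CryptoCoven Subgraph:

```typescript
// Only the predefined marketplace names are accepted at compile time,
// mirroring what the GraphQL enum enforces at the schema level.
type Marketplace = "OpenSeaV1" | "OpenSeaV2" | "SeaPort" | "LooksRare";

interface Sale {
  id: string;
  marketplace: Marketplace; // stored as its string representation
}

function recordSale(id: string, marketplace: Marketplace): Sale {
  return { id, marketplace };
}

// Hypothetical sale ID; in a mapping this would come from the event.
const sale = recordSale("0xabc123-1", "SeaPort");
```

A typo such as `"Seaport"` is rejected before the data is ever written, which is the consistency guarantee enums provide.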
diff --git a/website/src/pages/ar/subgraphs/cookbook/grafting.mdx b/website/src/pages/ar/subgraphs/cookbook/grafting.mdx
index 704e7df3f3f6..4b7dad1a54d9 100644
--- a/website/src/pages/ar/subgraphs/cookbook/grafting.mdx
+++ b/website/src/pages/ar/subgraphs/cookbook/grafting.mdx
@@ -2,13 +2,13 @@
title: Replace a Contract and Keep its History With Grafting
---
-In this guide, you will learn how to build and deploy new subgraphs by grafting existing subgraphs.
+In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs.
## What is Grafting?
-Grafting reuses the data from an existing subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed. Also, it can be used when adding a feature to a subgraph that takes long to index from scratch.
+Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch.
-The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways:
+The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways:
- يضيف أو يزيل أنواع الكيانات
- يزيل الصفات من أنواع الكيانات
@@ -22,38 +22,38 @@ For more information, you can check:
- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs)
-In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract.
+In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract.
## Important Note on Grafting When Upgrading to the Network
-> **Caution**: It is recommended to not use grafting for subgraphs published to The Graph Network
+> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network
### Why Is This Important?
-Grafting is a powerful feature that allows you to "graft" one subgraph onto another, effectively transferring historical data from the existing subgraph to a new version. It is not possible to graft a subgraph from The Graph Network back to Subgraph Studio.
+Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio.
### Best Practices
-**Initial Migration**: when you first deploy your subgraph to the decentralized network, do so without grafting. Ensure that the subgraph is stable and functioning as expected.
+**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected.
-**Subsequent Updates**: once your subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data.
+**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data.
By adhering to these guidelines, you minimize risks and ensure a smoother migration process.
## Building an Existing Subgraph
-Building subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing subgraph used in this tutorial, the following repo is provided:
+Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided:
- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial)
-> Note: The contract used in the subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit).
+> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit).
## تعريف Subgraph Manifest
-The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest that you will use:
+The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use:
```yaml
-specVersion: 0.0.4
+specVersion: 1.3.0
schema:
file: ./schema.graphql
dataSources:
@@ -66,7 +66,7 @@ dataSources:
startBlock: 5955690
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Withdrawal
@@ -85,27 +85,27 @@ dataSources:
## Grafting Manifest Definition
-Grafting requires adding two new items to the original subgraph manifest:
+Grafting requires adding two new items to the original Subgraph manifest:
```yaml
---
features:
- grafting # feature name
graft:
- base: Qm... # subgraph ID of base subgraph
+ base: Qm... # Subgraph ID of base Subgraph
block: 5956000 # block number
```
- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features).
-- `graft:` is a map of the `base` subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base subgraph up to and including the given block and then continue indexing the new subgraph from that block on.
+- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on.
-The `base` and `block` values can be found by deploying two subgraphs: one for the base indexing and one with grafting
+The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting.
## Deploying the Base Subgraph
-1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-example`
-2. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-example` folder from the repo
-3. Once finished, verify the subgraph is indexing properly. If you run the following command in The Graph Playground
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example`
+2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo
+3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground
```graphql
{
@@ -138,16 +138,16 @@ It returns something like this:
}
```
-Once you have verified the subgraph is indexing properly, you can quickly update the subgraph with grafting.
+Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting.
## Deploying the Grafting Subgraph
The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc.
-1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-replacement`
-2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old subgraph. The `base` subgraph ID is the `Deployment ID` of your original `graph-example` subgraph. You can find this in Subgraph Studio.
-3. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-replacement` folder from the repo
-4. Once finished, verify the subgraph is indexing properly. If you run the following command in The Graph Playground
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement`
+2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) by the old contract that you care about, and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio.
+3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo
+4. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground
```graphql
{
@@ -185,9 +185,9 @@ It should return the following:
}
```
-You can see that the `graft-replacement` subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` subgraph.
+You can see that the `graft-replacement` Subgraph is indexing from older `graft-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined in the `graft-replacement` Subgraph.
-Congrats! You have successfully grafted a subgraph onto another subgraph.
+Congrats! You have successfully grafted a Subgraph onto another Subgraph.
## مصادر إضافية
diff --git a/website/src/pages/ar/subgraphs/cookbook/near.mdx b/website/src/pages/ar/subgraphs/cookbook/near.mdx
index bdbe8e518a6b..04daec8b6ac7 100644
--- a/website/src/pages/ar/subgraphs/cookbook/near.mdx
+++ b/website/src/pages/ar/subgraphs/cookbook/near.mdx
@@ -2,17 +2,17 @@
title: بناء Subgraphs على NEAR
---
-This guide is an introduction to building subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/).
+This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/).
## ما هو NEAR؟
[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information.
-## ماهي NEAR subgraphs؟
+## What are NEAR Subgraphs?
-The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts.
+The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts.
-Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR subgraphs:
+Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs:
- Block handlers: these run on every new block
- Receipt handlers: run every time a message is executed at a specified account
@@ -23,35 +23,35 @@ Subgraphs are event-based, which means that they listen for and then process onc
## Building NEAR Subgraphs
-`@graphprotocol/graph-cli` is a command-line tool for building and deploying subgraphs.
+`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs.
-`@graphprotocol/graph-ts` is a library of subgraph-specific types.
+`@graphprotocol/graph-ts` is a library of Subgraph-specific types.
-NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`.
+NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`.
-> Building a NEAR subgraph is very similar to building a subgraph that indexes Ethereum.
+> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum.
-هناك ثلاثة جوانب لتعريف الـ subgraph:
+There are three aspects of Subgraph definition:
-**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source.
+**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source.
-**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality.
-During subgraph development there are two key commands:
+During Subgraph development there are two key commands:
```bash
$ graph codegen # generates types from the schema file identified in the manifest
-$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the subgraph files in a /build folder
+$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
```
### Subgraph Manifest Definition
-The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for a NEAR subgraph:
+The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph:
```yaml
-specVersion: 0.0.2
+specVersion: 1.3.0
schema:
file: ./src/schema.graphql # link to the schema file
dataSources:
@@ -61,7 +61,7 @@ dataSources:
account: app.good-morning.near # This data source will monitor this account
startBlock: 10662188 # Required for NEAR
mapping:
- apiVersion: 0.0.5
+ apiVersion: 0.0.9
language: wasm/assemblyscript
blockHandlers:
- handler: handleNewBlock # the function name in the mapping file
@@ -70,7 +70,7 @@ dataSources:
file: ./src/mapping.ts # link to the file with the Assemblyscript mappings
```
-- NEAR subgraphs introduce a new `kind` of data source (`near`)
+- NEAR Subgraphs introduce a new `kind` of data source (`near`)
- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet`
- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account.
- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least one prefix or suffix must be specified; they will match any account starting or ending with the listed values, respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary, the other field can be omitted.
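The matching rule above can be sketched in plain TypeScript. This is an illustrative reading of the documented semantics (mirroring `[app|good].*[morning.near|morning.testnet]`), not Graph Node source code, and `matchesAccount` is a hypothetical helper name:

```typescript
// Illustrative sketch of the assumed account-matching semantics:
// an account must satisfy every constraint that is present, i.e. match
// a listed prefix (if any are given) and a listed suffix (if any are given).
function matchesAccount(account: string, prefixes: string[], suffixes: string[]): boolean {
  const prefixOk = prefixes.length === 0 || prefixes.some((p) => account.startsWith(p))
  const suffixOk = suffixes.length === 0 || suffixes.some((s) => account.endsWith(s))
  return prefixOk && suffixOk
}

console.log(matchesAccount('good-morning.near', ['app', 'good'], ['morning.near', 'morning.testnet'])) // true
console.log(matchesAccount('evening.near', ['app', 'good'], ['morning.near', 'morning.testnet'])) // false
```

Omitting one list (an empty array here) makes the other constraint decide on its own, matching the "the other field can be omitted" behavior.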
@@ -92,7 +92,7 @@ accounts:
### Schema Definition
-Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
### AssemblyScript Mappings
@@ -165,31 +165,31 @@ These types are passed to block & receipt handlers:
- Block handlers will receive a `Block`
- Receipt handlers will receive a `ReceiptWithOutcome`
-Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR subgraph developers during mapping execution.
+Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution.
This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs.
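As a plain-TypeScript analogue of that parsing step (in a real mapping, graph-ts's `json.fromString(...)` fills this role; the log below is a made-up example in the NEAR event-log shape):

```typescript
// Plain-TypeScript analogue of processing a stringified NEAR log.
// In an actual mapping you would call json.fromString(...) from graph-ts instead.
const rawLog = '{"standard":"nep141","event":"ft_transfer","data":[{"amount":"100"}]}'
const parsed = JSON.parse(rawLog) as { standard: string; event: string; data: { amount: string }[] }
console.log(parsed.event) // ft_transfer
console.log(parsed.data[0].amount) // 100
```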
## Deploying a NEAR Subgraph
-Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released).
+Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released).
Subgraph Studio and the upgrade Indexer on The Graph Network currently support indexing NEAR mainnet and testnet in beta, with the following network names:
- `near-mainnet`
- `near-testnet`
-More information on creating and deploying subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/).
+More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/).
-As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a subgraph".
+As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph".
-Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command:
+Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command:
```sh
-$ graph create --node # creates a subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
-$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the subgraph to a specified Graph Node based on the manifest IPFS hash
+$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
+$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
```
-The node configuration will depend on where the subgraph is being deployed.
+The node configuration will depend on where the Subgraph is being deployed.
### Subgraph Studio
@@ -204,7 +204,7 @@ graph deploy
graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001
```
-بمجرد نشر الـ subgraph الخاص بك ، سيتم فهرسته بواسطة Graph Node. يمكنك التحقق من تقدمه عن طريق الاستعلام عن الـ subgraph نفسه:
+Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself:
```graphql
{
@@ -228,11 +228,11 @@ graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001
+> Important Reminders:
+>
+> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/).
+> - This feature requires `specVersion` 1.3.0.
+
+## Overview
+
+Subgraph composition empowers you to use one Subgraph as a data source for another, allowing it to consume and respond to entity changes. Instead of fetching onchain data directly, a Subgraph can listen for updates from another Subgraph and react to changes. This is useful for aggregating data from multiple Subgraphs or triggering actions based on external updates.
+
+## Prerequisites
+
+To deploy **all** Subgraphs locally, you must have the following:
+
+- A [Graph Node](https://github.com/graphprotocol/graph-node) instance running locally
+- An [IPFS](https://docs.ipfs.tech/) instance running locally
+- [Node.js](https://nodejs.org) and npm
+
+## Get Started
+
+The following guide provides examples for defining three source Subgraphs to create one powerful composed Subgraph.
+
+### Specifics
+
+- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality.
+- Each source Subgraph is optimized with a specific entity.
+- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance.
+
+### Step 1. Deploy Block Time Source Subgraph
+
+This first source Subgraph calculates the block time for each block.
+
+- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined.
+- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the following commands:
+
+```bash
+npm install
+npm run codegen
+npm run build
+npm run create-local
+npm run deploy-local
+```
+
+### Step 2. Deploy Block Cost Source Subgraph
+
+This second source Subgraph indexes the cost of each block.
+
+#### Key Functions
+
+- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields.
+- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the same commands as above.
+
+### Step 3. Define Block Size in Source Subgraph
+
+This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above.
+
+#### Key Functions
+
+- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size.
+- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly.
+
+### Step 4. Combine Into Block Stats Subgraph
+
+This composed Subgraph combines and aggregates the information from the three source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above.
+
+> Note:
+>
+> - Any change to a source Subgraph will likely generate a new deployment ID.
+> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes.
+> - All source Subgraphs should be deployed before the composed Subgraph is deployed.
+
+#### Key Functions
+
+- It provides a consolidated data model that encompasses all relevant block metrics.
+- It combines data from three source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses.
+
+## Key Takeaways
+
+- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs.
+- The setup includes the deployment of three source Subgraphs and one final deployment of the composed Subgraph.
+- This feature unlocks scalability, simplifying both development and maintenance efficiency.
+
+## Additional Resources
+
+- Check out all the code for this example in [this GitHub repo](https://github.com/isum/subgraph-composition-example).
+- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/).
+- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
diff --git a/website/src/pages/ar/subgraphs/cookbook/subgraph-composition.mdx b/website/src/pages/ar/subgraphs/cookbook/subgraph-composition.mdx
new file mode 100644
index 000000000000..68f637752b46
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/cookbook/subgraph-composition.mdx
@@ -0,0 +1,139 @@
+---
+title: Enhance Your Subgraph Build Using Subgraph Composition with Sushiswap v3 on Base
+sidebarTitle: Using Subgraph Composition with Sushiswap v3 on Base
+---
+
+Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it.
+
+> Important Reminders:
+>
+> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/).
+> - You can use existing Subgraphs, but they must be redeployed with `specVersion` 1.3.0, which doesn't require you to write new code.
+> - You may want to restructure your Subgraph to split out the logic as you move to a composable Subgraph world.
+
+## Introduction
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Speed up your Subgraph's syncing speed
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+
+- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts.
+- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers.
+
+While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
+
+### Source Subgraph
+
+The source Subgraph tracks events from the Sushiswap v3 Subgraph on the Base chain. This Subgraph's configuration file is `source/subgraph.yaml`.
+
+> The `source/subgraph.yaml` employs the advanced Subgraph feature, [declarative `eth_calls`](https://thegraph.com/docs/en/subgraphs/developing/creating/advanced/#declared-eth_call). To review the code for this `source/subgraph.yaml`, check out the [source Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/a5f13cb4b961f92d5c5631dca589c54feb1c0a19/source/subgraph.yaml).
+
+### Dependent Subgraph
+
+The dependent Subgraph is in the `dependent/subgraph.yaml` file, which specifies the source Subgraph as a data source. This Subgraph uses entities from the source to trigger specific actions based on changes to those entities.
+
+> To review the code for this `dependent/subgraph.yaml`, check out the [dependent Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/main/dependant/subgraph.yaml).
+
+## Get Started
+
+The following is a guide that illustrates how to use one Subgraph as a data source for another. This example uses:
+
+- Sushiswap v3 Subgraph on Base chain
+- Two Subgraphs (but you can use up to **5 source** Subgraphs in your own development).
+
+### Step 1. Set Up Your Source Subgraph
+
+To set the source Subgraph as a data source in the dependent Subgraph, include the following in `subgraph.yaml`:
+
+```yaml
+specVersion: 1.3.0
+schema:
+ file: ./schema.graphql
+dataSources:
+ - kind: subgraph
+ name: Factory
+ network: base
+ source:
+ address: 'QmdXu8byAFCGSDWsB5gMQjWr6GUvEVB7S1hemfxNuomerz'
+ startBlock: 82522
+```
+
+Here, `source.address` refers to the Deployment ID of the source Subgraph, and `startBlock` specifies the block from which indexing should begin.
+
+### Step 2. Define Handlers in Dependent Subgraph
+
+Below is an example of defining handlers in the dependent Subgraph:
+
+```typescript
+// Entity classes (Pool, Token, Bundle) come from the generated schema types;
+// EntityTrigger/EntityOp are provided by the Subgraph composition runtime, and
+// the pricing helpers used below are defined elsewhere in the example repo.
+import { Address, BigInt } from '@graphprotocol/graph-ts'
+
+export function handleInitialize(trigger: EntityTrigger): void {
+ if (trigger.operation === EntityOp.Create) {
+ let entity = trigger.data
+ let poolAddressParam = Address.fromBytes(entity.poolAddress)
+
+ // Update pool sqrt price and tick
+ let pool = Pool.load(poolAddressParam.toHexString()) as Pool
+ pool.sqrtPrice = entity.sqrtPriceX96
+ pool.tick = BigInt.fromI32(entity.tick)
+ pool.save()
+
+ // Update token prices
+ let token0 = Token.load(pool.token0) as Token
+ let token1 = Token.load(pool.token1) as Token
+
+ // Update ETH price in USD
+ let bundle = Bundle.load('1') as Bundle
+ bundle.ethPriceUSD = getEthPriceInUSD()
+ bundle.save()
+
+ updatePoolDayData(entity)
+ updatePoolHourData(entity)
+
+ // Update derived ETH price for tokens
+ token0.derivedETH = findEthPerToken(token0)
+ token1.derivedETH = findEthPerToken(token1)
+ token0.save()
+ token1.save()
+ }
+}
+```
+
+In this example, the `handleInitialize` function is triggered when a new `Initialize` entity is created in the source Subgraph, passed as `EntityTrigger`. The handler updates the pool and token entities based on data from the new `Initialize` entity.
+
+`EntityTrigger` has three fields:
+
+1. `operation`: Specifies the operation type, which can be `Create`, `Modify`, or `Remove`.
+2. `type`: Indicates the entity type.
+3. `data`: Contains the entity data.
+
+Developers can then determine specific actions for the entity data based on the operation type.
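To make that dispatch concrete, here is a self-contained sketch using local stand-ins for the `EntityTrigger`/`EntityOp` shapes described above (illustrative only; the real types come from the composition runtime, and `describeTrigger` is a hypothetical helper):

```typescript
// Local stand-ins mirroring the three fields described above:
// operation (Create | Modify | Remove), type, and data.
enum EntityOp {
  Create,
  Modify,
  Remove,
}

interface EntityTrigger<T> {
  operation: EntityOp
  type: string
  data: T
}

// Branch on the operation type to decide what action to take.
function describeTrigger(trigger: EntityTrigger<{ id: string }>): string {
  if (trigger.operation === EntityOp.Create) return `created ${trigger.type} ${trigger.data.id}`
  if (trigger.operation === EntityOp.Modify) return `modified ${trigger.type} ${trigger.data.id}`
  return `removed ${trigger.type} ${trigger.data.id}`
}

console.log(describeTrigger({ operation: EntityOp.Create, type: 'Initialize', data: { id: '0x1' } }))
```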
+
+## Key Takeaways
+
+- Use this powerful tool to quickly scale your Subgraph development and reuse existing data.
+- The setup includes creating a base source Subgraph and referencing it in a dependent Subgraph.
+- You define handlers in the dependent Subgraph to perform actions based on changes in the source Subgraph's entities.
+
+This approach unlocks composability and scalability, simplifying both development and maintenance efficiency.
+
+## Additional Resources
+
+To use other advanced features in your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/) and [this Subgraph composition example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph).
+
+To learn how to define three source Subgraphs, check out [this Subgraph composition example repo](https://github.com/isum/subgraph-composition-example).
diff --git a/website/src/pages/ar/subgraphs/cookbook/subgraph-debug-forking.mdx b/website/src/pages/ar/subgraphs/cookbook/subgraph-debug-forking.mdx
index 3bacc1f60003..364fb8ce4d9c 100644
--- a/website/src/pages/ar/subgraphs/cookbook/subgraph-debug-forking.mdx
+++ b/website/src/pages/ar/subgraphs/cookbook/subgraph-debug-forking.mdx
@@ -2,23 +2,23 @@
title: Quick and Easy Subgraph Debugging Using Forks
---
-As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up subgraph debugging!
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync up your Subgraph with the target blockchain. The discrepancy between quick changes made for debugging and the long wait times needed for indexing is extremely counterproductive, and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed up Subgraph debugging!
## Ok, what is it?
-**Subgraph forking** is the process of lazily fetching entities from _another_ subgraph's store (usually a remote one).
+**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one).
-In the context of debugging, **subgraph forking** allows you to debug your failed subgraph at block _X_ without needing to wait to sync-up to block _X_.
+In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_.
## What?! How?
-When you deploy a subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
+When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
-In a nutshell, we are going to _fork the failing subgraph_ from a remote Graph Node that is guaranteed to have the subgraph indexed up to block _X_ in order to provide the locally deployed subgraph being debugged at block _X_ an up-to-date view of the indexing state.
+In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state.
## Please, show me some code!
-To stay focused on subgraph debugging, let's keep things simple and run along with the [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.
+To stay focused on Subgraph debugging, let's keep things simple and run along with the [example Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.
Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever:
@@ -44,22 +44,22 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void {
}
```
-Oops, how unfortunate, when I deploy my perfect looking subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error.
+Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error.
The usual way to attempt a fix is:
1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't).
-2. Re-deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
+2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
3. Wait for it to sync-up.
4. If it breaks again, go back to 1!
It is indeed pretty similar to an ordinary debugging process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._
-Using **subgraph forking** we can essentially eliminate this step. Here is how it looks:
+Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks:
0. Spin-up a local Graph Node with the **_appropriate fork-base_** set.
1. Make a change in the mappings source, which you believe will solve the issue.
-2. Deploy to the local Graph Node, **_forking the failing subgraph_** and **_starting from the problematic block_**.
+2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**.
3. إذا حدثت المشكلة مرة أخرى ، فارجع إلى 1!
Now, you may have two questions:
@@ -69,18 +69,18 @@ Using **subgraph forking** we can essentially eliminate this step. Here is how i
And my answer is:
-1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the subgraph's store.
+1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the Subgraph's store.
2. Forking is easy, no need to sweat:
```bash
$ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020
```
-Also, don't forget to set the `dataSources.source.startBlock` field in the subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
+Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
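For reference, the relevant manifest fragment could look like this (the contract address is a placeholder; `6190343` is the problematic block used later in this walkthrough):

```yaml
dataSources:
  - kind: ethereum/contract
    name: Gravity
    source:
      address: '<contract-address>' # your deployed contract
      startBlock: 6190343 # the problematic block, so the fork starts indexing here
```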
So, here is what I do:
-1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
+1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
```
$ cargo run -p graph-node --release -- \
@@ -91,11 +91,11 @@ $ cargo run -p graph-node --release -- \
```
2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex.
-3. After I made the changes I deploy my subgraph to the local Graph Node, **_forking the failing subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`:
+3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`:
```bash
$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020
```
4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working.
-5. I deploy my now bug-free subgraph to a remote Graph Node and live happily ever after! (no potatoes tho)
+5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho)
diff --git a/website/src/pages/ar/subgraphs/cookbook/subgraph-uncrashable.mdx b/website/src/pages/ar/subgraphs/cookbook/subgraph-uncrashable.mdx
index 0cc91a0fa2c3..a08e2a7ad8c9 100644
--- a/website/src/pages/ar/subgraphs/cookbook/subgraph-uncrashable.mdx
+++ b/website/src/pages/ar/subgraphs/cookbook/subgraph-uncrashable.mdx
@@ -2,23 +2,23 @@
title: Safe Subgraph Code Generator
---
-[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your subgraph are completely safe and consistent.
+[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent.
## Why integrate with Subgraph Uncrashable?
-- **Continuous Uptime**. Mishandled entities may cause subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your subgraphs “uncrashable” and ensure business continuity.
+- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity.
-- **Completely Safe**. Common problems seen in subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic.
+- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic.
-- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy.
+- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
**Key Features**
-- The code generation tool accommodates **all** subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification.
+- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the user's specification.
- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function.
-- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy.
+- Warning logs are recorded indicating where there is a breach of Subgraph logic, to help patch the issue and ensure data accuracy.
Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command.
@@ -26,4 +26,4 @@ Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen
graph codegen -u [options] []
```
-Visit the [subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer subgraphs.
+Visit the [Subgraph Uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs.
diff --git a/website/src/pages/ar/subgraphs/cookbook/transfer-to-the-graph.mdx b/website/src/pages/ar/subgraphs/cookbook/transfer-to-the-graph.mdx
index f713ec3a5e76..4be3dcedffe8 100644
--- a/website/src/pages/ar/subgraphs/cookbook/transfer-to-the-graph.mdx
+++ b/website/src/pages/ar/subgraphs/cookbook/transfer-to-the-graph.mdx
@@ -1,14 +1,14 @@
---
-title: Tranfer to The Graph
+title: Transfer to The Graph
---
-Quickly upgrade your subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/).
+Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/).
## Benefits of Switching to The Graph
-- Use the same subgraph that your apps already use with zero-downtime migration.
+- Use the same Subgraph that your apps already use with zero-downtime migration.
- Increase reliability from a global network supported by 100+ Indexers.
-- Receive lightning-fast support for subgraphs 24/7, with an on-call engineering team.
+- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team.
## Upgrade Your Subgraph to The Graph in 3 Easy Steps
@@ -21,9 +21,9 @@ Quickly upgrade your subgraphs from any platform to [The Graph's decentralized n
### Create a Subgraph in Subgraph Studio
- Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet.
-- Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name".
+- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name".
-> Note: After publishing, the subgraph name will be editable but requires onchain action each time, so name it properly.
+> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly.
### Install the Graph CLI
@@ -37,7 +37,7 @@ Using [npm](https://www.npmjs.com/):
npm install -g @graphprotocol/graph-cli@latest
```
-Use the following command to create a subgraph in Studio using the CLI:
+Use the following command to create a Subgraph in Studio using the CLI:
```sh
graph init --product subgraph-studio
@@ -53,7 +53,7 @@ graph auth
## 2. Deploy Your Subgraph to Studio
-If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your subgraph.
+If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph.
In The Graph CLI, run the following command:
@@ -62,7 +62,7 @@ graph deploy --ipfs-hash
```
-> **Note:** Every subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1).
+> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy, simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1).
## 3. Publish Your Subgraph to The Graph Network
@@ -70,17 +70,17 @@ graph deploy --ipfs-hash
### Query Your Subgraph
-> To attract about 3 indexers to query your subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.
+> To attract about 3 Indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.
-You can start [querying](/subgraphs/querying/introduction/) any subgraph by sending a GraphQL query into the subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.
+You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.
#### Example
-[CryptoPunks Ethereum subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:
+[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:

-The query URL for this subgraph is:
+The query URL for this Subgraph is:
```sh
https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycgxjsde5PbB832NHgJfZNqK
@@ -96,9 +96,9 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the
### Monitor Subgraph Status
-Once you upgrade, you can access and manage your subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
+Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
### مصادر إضافية
-- To quickly create and publish a new subgraph, check out the [Quick Start](/subgraphs/quick-start/).
-- To explore all the ways you can optimize and customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/).
+- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
+- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/ar/subgraphs/developing/creating/advanced.mdx b/website/src/pages/ar/subgraphs/developing/creating/advanced.mdx
index d0f9bb2cc348..c35d101f373e 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/advanced.mdx
+++ b/website/src/pages/ar/subgraphs/developing/creating/advanced.mdx
@@ -4,9 +4,9 @@ title: Advanced Subgraph Features
## نظره عامة
-Add and implement advanced subgraph features to enhanced your subgraph's built.
+Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below:
+Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below:
| Feature | Name |
| ---------------------------------------------------- | ---------------- |
@@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar
| [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` |
| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` |
-For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be:
+For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be:
```yaml
-specVersion: 0.0.4
+specVersion: 1.3.0
description: Gravatar for Ethereum
features:
- fullTextSearch
@@ -25,7 +25,7 @@ features:
dataSources: ...
```
-> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used.
+> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used.
## Timeseries and Aggregations
@@ -33,9 +33,9 @@ Prerequisites:
- Subgraph specVersion must be ≥1.1.0.
-Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more.
+Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more.
-This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL.
+This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL.
### Example Schema
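A minimal sketch of such a schema pair, with illustrative entity and field names, could look like the following (a timeseries entity feeding an aggregation entity):

```graphql
# Timeseries entity: immutable, timestamped data points written by mappings.
type Data @entity(timeseries: true) {
  id: Int8!
  timestamp: Timestamp!
  price: BigDecimal!
}

# Aggregation entity: computed automatically from Data at each interval.
type Stats @aggregation(intervals: ["hour", "day"], source: "Data") {
  id: Int8!
  timestamp: Timestamp!
  sum: BigDecimal! @aggregate(fn: "sum", arg: "price")
}
```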
@@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified
## أخطاء غير فادحة
-Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic.
+Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic.
-> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio.
+> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio.
-Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest:
+Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest:
```yaml
-specVersion: 0.0.4
+specVersion: 1.3.0
description: Gravatar for Ethereum
features:
- nonFatalErrors
...
```
-The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example:
+The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example:
```graphql
foos(first: 100, subgraphError: allow) {
@@ -123,7 +123,7 @@ _meta {
}
```
-If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response:
+If the Subgraph encounters an error, that query will return both the data and a GraphQL error with the message `"indexing_error"`, as in this example response:
```graphql
"data": {
@@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a
## IPFS/Arweave File Data Sources
-File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave.
+File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave.
> This also lays the groundwork for deterministic indexing of off-chain data, as well as the potential introduction of arbitrary HTTP-sourced data.
@@ -221,7 +221,7 @@ templates:
- name: TokenMetadata
kind: file/ipfs
mapping:
- apiVersion: 0.0.7
+ apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/mapping.ts
handler: handleMetadata
@@ -290,7 +290,7 @@ Example:
import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates'
const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm'
-//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs.
+//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs.
export function handleTransfer(event: TransferEvent): void {
let token = Token.load(event.params.tokenId.toString())
@@ -317,23 +317,23 @@ This will create a new file data source, which will poll Graph Node's configured
This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity.
-> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file
+> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file
Congratulations, you are using file data sources!
-#### Deploying your subgraphs
+#### Deploying your Subgraphs
-You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0.
+You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0.
#### Limitations
-File data source handlers and entities are isolated from other subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific:
+File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific:
- Entities created by File Data Sources are immutable, and cannot be updated
- File Data Source handlers cannot access entities from other file data sources
- Entities associated with File Data Sources cannot be accessed by chain-based handlers
-> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a subgraph!
+> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph!
Additionally, it is not possible to create data sources from a file data source, be it an onchain data source or another file data source. This restriction may be lifted in the future.
@@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra
> **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0`
-Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments.
+Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments.
-- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data.
+- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data.
-- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain.
+- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain.
### How Topic Filters Work
-When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments.
+When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments.
- The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event.
@@ -401,7 +401,7 @@ In this example:
#### Configuration in Subgraphs
-Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured:
+Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured:
```yaml
eventHandlers:
@@ -436,7 +436,7 @@ In this configuration:
- `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender.
- `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver.
-- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`.
+- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`.
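Assuming a standard ERC-20 style `Transfer(address indexed from, address indexed to, uint256 value)` event, the configuration described above might be sketched in the manifest as follows (the handler name is illustrative):

```yaml
eventHandlers:
  - event: Transfer(indexed address,indexed address,uint256)
    handler: handleDirectedTransfer # illustrative name
    topic1: ['0xAddressA'] # filter on the sender
    topic2: ['0xAddressB'] # filter on the receiver
```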
#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses
@@ -452,17 +452,17 @@ In this configuration:
- `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender.
- `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver.
-- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses.
+- The Subgraph will index transactions that occur in either direction between multiple addresses, allowing for comprehensive monitoring of interactions involving all addresses.
## Declared eth_call
> Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node.
-Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.
+Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.
This feature does the following:
-- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency.
+- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency.
- Allows faster data fetching, resulting in quicker query responses and a better user experience.
- Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient.
@@ -474,7 +474,7 @@ This feature does the following:
#### Scenario without Declarative `eth_calls`
-Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings.
+Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings.
Traditionally, these calls might be made sequentially:
@@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds
#### How it Works
-1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel.
+1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel.
2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously.
-3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing.
+3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing.
#### Example Configuration in Subgraph Manifest
Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`.
-`Subgraph.yaml` using `event.address`:
+`subgraph.yaml` using `event.address`:
```yaml
eventHandlers:
@@ -524,7 +524,7 @@ Details for the example above:
- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)`
- The `address` and `arguments` can be replaced with variables that will be available when the handler is executed.
-`Subgraph.yaml` using `event.params`
+`subgraph.yaml` using `event.params`
```yaml
calls:
@@ -535,22 +535,22 @@ calls:
> **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network).
-When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed.
+When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing Subgraph working again after it has failed.
-A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level:
+A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level:
```yaml
description: ...
graft:
- base: Qm... # Subgraph ID of base subgraph
+ base: Qm... # Subgraph ID of base Subgraph
block: 7345624 # Block number
```
-When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph.
+When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph.
-Because grafting copies rather than indexes base data, it is much quicker to get the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied.
+Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied.
-The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways:
+The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways:
- يضيف أو يزيل أنواع الكيانات
- يزيل الصفات من أنواع الكيانات
@@ -560,4 +560,4 @@ The grafted subgraph can use a GraphQL schema that is not identical to the one o
- It adds or removes interfaces
- يغير للكيانات التي يتم تنفيذ الواجهة لها
-> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest.
+> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest.
diff --git a/website/src/pages/ar/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/ar/subgraphs/developing/creating/assemblyscript-mappings.mdx
index 2518d7620204..3062fe900657 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/assemblyscript-mappings.mdx
+++ b/website/src/pages/ar/subgraphs/developing/creating/assemblyscript-mappings.mdx
@@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t
For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled.
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events:
+In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events:
```javascript
import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity'
@@ -72,7 +72,7 @@ If no value is set for a field in the new entity with the same ID, the field wil
## توليد الكود
-In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the subgraph's GraphQL schema and the contract ABIs included in the data sources.
+In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources.
This is done with
@@ -80,7 +80,7 @@ This is done with
graph codegen [--output-dir ] []
```
-but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same:
+but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same:
```sh
# Yarn
@@ -90,7 +90,7 @@ yarn codegen
npm run codegen
```
-This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with.
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with:
```javascript
import {
@@ -102,12 +102,12 @@ import {
} from '../generated/Gravity/Gravity'
```
-In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with
+In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields, as well as a `save()` method to write entities to the store. All entity classes are written to `/schema.ts`, allowing mappings to import them with:
```javascript
import { Gravatar } from '../generated/schema'
```
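Once generated, these entity classes can be used from a handler roughly like this sketch (the `NewGravatar` event and its fields follow the Gravatar example used in these docs; treat the field names as illustrative):

```javascript
import { NewGravatar } from '../generated/Gravity/Gravity'
import { Gravatar } from '../generated/schema'

export function handleNewGravatar(event: NewGravatar): void {
  // Create a type-safe entity keyed by the event's Gravatar id
  let gravatar = new Gravatar(event.params.id.toHex())
  gravatar.owner = event.params.owner
  gravatar.displayName = event.params.displayName
  gravatar.save() // persist the entity to the store
}
```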
-> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph.
+> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph.
-Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find.
+Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find.
diff --git a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/CHANGELOG.md
index 5d90888ac378..5f964d3cbb78 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/CHANGELOG.md
+++ b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/CHANGELOG.md
@@ -1,5 +1,11 @@
# @graphprotocol/graph-ts
+## 0.38.0
+
+### Minor Changes
+
+- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings
+
## 0.37.0
### Minor Changes
diff --git a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/api.mdx
index 8245a637cc8a..ef43760cfdbf 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/api.mdx
+++ b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/api.mdx
@@ -2,12 +2,12 @@
title: AssemblyScript API
---
-> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/).
+> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/).
-Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box:
+Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box:
- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`)
-- Code generated from subgraph files by `graph codegen`
+- Code generated from Subgraph files by `graph codegen`
You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript).
@@ -27,18 +27,18 @@ The `@graphprotocol/graph-ts` library provides the following APIs:
### إصدارات
-The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph.
+The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph.
-| الاصدار | ملاحظات الإصدار |
-| :-: | --- |
-| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
-| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
-| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types Added `receipt` field to the Ethereum Event object |
-| 0.0.6 | Added `nonce` field to the Ethereum Transaction object Added `baseFeePerGas` to the Ethereum Block object |
-| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/)) `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
-| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
-| 0.0.3 | Added `from` field to the Ethereum Call object `ethereum.call.address` renamed to `ethereum.call.to` |
-| 0.0.2 | Added `input` field to the Ethereum Transaction object |
+| الاصدار | ملاحظات الإصدار |
+| :-----: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
+| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
+| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types Added `receipt` field to the Ethereum Event object |
+| 0.0.6 | Added `nonce` field to the Ethereum Transaction object Added `baseFeePerGas` to the Ethereum Block object |
+| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/)) `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
+| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
+| 0.0.3 | Added `from` field to the Ethereum Call object `ethereum.call.address` renamed to `ethereum.call.to` |
+| 0.0.2 | Added `input` field to the Ethereum Transaction object |
### الأنواع المضمنة (Built-in)
@@ -223,7 +223,7 @@ It adds the following method on top of the `Bytes` API:
The `store` API allows you to load, save, and remove entities from and to the Graph Node store.
-Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities.
+Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities.
#### إنشاء الكيانات
@@ -282,8 +282,8 @@ As of `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 and `@graphprotoco
The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists.
-- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip.
-- For some subgraphs, these missed lookups can contribute significantly to the indexing time.
+- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip.
+- For some Subgraphs, these missed lookups can contribute significantly to the indexing time.
```typescript
let id = event.transaction.hash // or however the ID is constructed
@@ -380,11 +380,11 @@ The Ethereum API provides access to smart contracts, public state variables, con
#### دعم أنواع الإيثيريوم
-As with entities, `graph codegen` generates classes for all smart contracts and events used in a subgraph. For this, the contract ABIs need to be part of the data source in the subgraph manifest. Typically, the ABI files are stored in an `abis/` folder.
+As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder.
-With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that subgraph authors do not have to worry about them.
+With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them.
-The following example illustrates this. Given a subgraph schema like
+The following example illustrates this. Given a Subgraph schema like
```graphql
type Transfer @entity {
@@ -483,7 +483,7 @@ class Log {
#### الوصول إلى حالة العقد الذكي Smart Contract
-The code generated by `graph codegen` also includes classes for the smart contracts used in the subgraph. These can be used to access public state variables and call functions of the contract at the current block.
+The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block.
A common pattern is to access the contract from which an event originates. This is achieved with the following code:
@@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) {
As long as the `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables a method with the same name is created automatically.
-Any other contract that is part of the subgraph can be imported from the generated code and can be bound to a valid address.
+Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address.
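As an illustration of the pattern above, a contract binding could be sketched like this. The class below is hypothetical; real bindings are emitted by `graph codegen` and perform read-only `eth_call`s through the host, whereas this sketch reads from a stub map standing in for chain state.

```typescript
// Hypothetical sketch of a generated contract binding; real bindings come from
// `graph codegen` and call the contract via the host. The `chain` map is a stub
// standing in for on-chain state.
class ERC20Contract {
  private constructor(
    private address: string,
    private chain: Map<string, string>,
  ) {}

  // Generated bindings expose a static `bind` to attach the class to an address.
  static bind(address: string, chain: Map<string, string>): ERC20Contract {
    return new ERC20Contract(address, chain)
  }

  // A public read-only function `symbol` becomes a method of the same name.
  symbol(): string {
    return this.chain.get(this.address + '.symbol') ?? ''
  }
}
```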
#### معالجة الاستدعاءات المعادة
@@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false
import { log } from '@graphprotocol/graph-ts'
```
-The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument.
+The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments.
The `log` API includes the following functions:
@@ -590,7 +590,7 @@ The `log` API includes the following functions:
- `log.info(fmt: string, args: Array): void` - logs an informational message.
- `log.warning(fmt: string, args: Array): void` - logs a warning.
- `log.error(fmt: string, args: Array): void` - logs an error message.
-- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the subgraph.
+- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph.
The `log` API takes a format string and an array of string values. It then replaces placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value and so on.
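The placeholder substitution described above can be sketched in a few lines. This is a conceptual illustration only; the real `log` functions are host calls into Graph Node, and `formatLog` is a name invented here.

```typescript
// Conceptual sketch of `{}` placeholder substitution; `formatLog` is an invented
// name, not part of graph-ts. Each `{}` is replaced by the next value in the array;
// placeholders beyond the end of the array are left as-is.
function formatLog(fmt: string, args: string[]): string {
  let i = 0
  return fmt.replace(/\{\}/g, () => (i < args.length ? args[i++] : '{}'))
}
```

For example, `formatLog('Transfer from {} to {}', ['0xabc', '0xdef'])` yields `'Transfer from 0xabc to 0xdef'`.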
@@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId'))
The only flag currently supported is `json`, which must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` will read each line in the file, deserialize it into a `JSONValue` and call the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited.
-On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the subgraph is marked as failed.
+On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed.
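The line-by-line semantics of the `json` flag can be sketched as follows. This is an illustration of the described behavior, not the `ipfs.map` host function itself; `mapJsonLines` is an invented name, and a callback error here would correspond to the handler being aborted.

```typescript
// Conceptual sketch of the `json` flag: the file is a series of JSON values, one per
// line, and the callback runs once per deserialized value. `mapJsonLines` is an
// invented name, not the graph-ts `ipfs.map` host function.
function mapJsonLines(file: string, callback: (value: unknown) => void): void {
  for (const line of file.split('\n')) {
    const trimmed = line.trim()
    if (trimmed.length === 0) continue
    // A parse or callback error here would abort the invoking handler.
    callback(JSON.parse(trimmed))
  }
}
```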
### Crypto API
@@ -836,7 +836,7 @@ The base `Entity` class and the child `DataSourceContext` class have helpers to
### DataSourceContext in Manifest
-The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`.
+The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`.
Here is a YAML example illustrating the usage of various types in the `context` section:
@@ -887,4 +887,4 @@ dataSources:
- `List`: Specifies a list of items. Each item needs to specify its type and data.
- `BigInt`: Specifies a large integer value. Must be quoted due to its large size.
-This context is then accessible in your subgraph mapping files, enabling more dynamic and configurable subgraphs.
+This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs.
diff --git a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/common-issues.mdx
index 6c50af984ad0..b0ce00e687e3 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/graph-ts/common-issues.mdx
+++ b/website/src/pages/ar/subgraphs/developing/creating/graph-ts/common-issues.mdx
@@ -2,7 +2,7 @@
title: مشاكل شائعة في أسمبلي سكريبت (AssemblyScript)
---
-هناك بعض مشاكل [أسمبلي سكريبت](https://github.com/AssemblyScript/assemblyscript) المحددة، التي من الشائع الوقوع فيها أثتاء تطوير غرافٍ فرعي. وهي تتراوح في صعوبة تصحيح الأخطاء، ومع ذلك، فإنّ إدراكها قد يساعد. وفيما يلي قائمة غير شاملة لهذه المشاكل:
+There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They vary in debugging difficulty; however, being aware of them may help. The following is a non-exhaustive list of these issues:
- `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object.
- لا يتم توريث النطاق في [دوال الإغلاق](https://www.assemblyscript.org/status.html#on-closures)، أي لا يمكن استخدام المتغيرات المعلنة خارج دوال الإغلاق. الشرح في [ النقاط الهامة للمطورين #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s).
diff --git a/website/src/pages/ar/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/ar/subgraphs/developing/creating/install-the-cli.mdx
index b55d24367e50..81469bc1837b 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/install-the-cli.mdx
+++ b/website/src/pages/ar/subgraphs/developing/creating/install-the-cli.mdx
@@ -2,11 +2,11 @@
title: قم بتثبيت Graph CLI
---
-> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/).
+> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/).
## نظره عامة
-The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network.
+The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network.
## Getting Started
@@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest
yarn global add @graphprotocol/graph-cli
```
-The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started.
+The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started.
## إنشاء الـ Subgraph
### من عقد موجود
-The following command creates a subgraph that indexes all events of an existing contract:
+The following command creates a Subgraph that indexes all events of an existing contract:
```sh
graph init \
@@ -51,25 +51,25 @@ graph init \
- If any of the optional arguments are missing, it guides you through an interactive form.
-- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page.
+- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page.
### من مثال Subgraph
-The following command initializes a new project from an example subgraph:
+The following command initializes a new project from an example Subgraph:
```sh
graph init --from-example=example-subgraph
```
-- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated.
+- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated.
-- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events.
+- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events.
### Add New `dataSources` to an Existing Subgraph
-`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them.
+`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them.
-Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command:
+Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command:
```sh
graph add []
@@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is
يجب أن تتطابق ملف (ملفات) ABI مع العقد (العقود) الخاصة بك. هناك عدة طرق للحصول على ملفات ABI:
- إذا كنت تقوم ببناء مشروعك الخاص ، فمن المحتمل أن تتمكن من الوصول إلى أحدث ABIs.
-- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile.
-- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail.
-
-## SpecVersion Releases
-
-| الاصدار | ملاحظات الإصدار |
-| :-: | --- |
-| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
-| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
-| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs |
-| 0.0.9 | Supports `endBlock` feature |
-| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
-| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
-| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
-| 0.0.5 | Added support for event handlers having access to transaction receipts. |
-| 0.0.4 | Added support for managing subgraph features. |
+- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile.
+- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail.
diff --git a/website/src/pages/ar/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/ar/subgraphs/developing/creating/ql-schema.mdx
index 56d9abb39ae7..c5b869610abd 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/ql-schema.mdx
+++ b/website/src/pages/ar/subgraphs/developing/creating/ql-schema.mdx
@@ -4,7 +4,7 @@ title: The Graph QL Schema
## نظره عامة
-The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language.
+The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language.
> Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section.
@@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar
Before defining entities, it is important to take a step back and think about how your data is structured and linked.
-- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform.
+- All queries will be made against the data model defined in the Subgraph schema. As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform.
- It may be useful to imagine entities as "objects containing data", rather than as events or functions.
- You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type.
- Each type that should be an entity is required to be annotated with an `@entity` directive.
@@ -72,16 +72,16 @@ For some entity types the `id` for `Bytes!` is constructed from the id's of two
The following scalars are supported in the GraphQL API:
-| النوع | الوصف |
-| --- | --- |
-| `Bytes` | مصفوفة Byte ، ممثلة كسلسلة سداسية عشرية. يشيع استخدامها في Ethereum hashes وعناوينه. |
-| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. |
-| `Boolean` | Scalar for `boolean` values. |
-| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. |
-| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. |
-| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. |
-| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. |
-| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. |
+| النوع | الوصف |
+| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `Bytes` | مصفوفة Byte ، ممثلة كسلسلة سداسية عشرية. يشيع استخدامها في Ethereum hashes وعناوينه. |
+| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. |
+| `Boolean` | Scalar for `boolean` values. |
+| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. |
+| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. |
+| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. |
+| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. |
+| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. |
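For illustration, a hypothetical entity using several of these scalars might look like the following (the type and field names are examples only):

```graphql
# Hypothetical entity illustrating the scalar types above; field names are examples.
type TokenStats @entity {
  id: Bytes!
  symbol: String!
  active: Boolean!
  decimals: Int!
  blockTime: Timestamp!
  totalSupply: BigInt!
  price: BigDecimal!
}
```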
### Enums
@@ -141,7 +141,7 @@ type TokenBalance @entity {
Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived.
-For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the subgraph. In general, storing arrays of entities should be avoided as much as is practical.
+For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical.
#### Example
@@ -160,7 +160,7 @@ type TokenBalance @entity {
}
```
-Here is an example of how to write a mapping for a subgraph with reverse lookups:
+Here is an example of how to write a mapping for a Subgraph with reverse lookups:
```typescript
let token = new Token(event.address) // Create Token
@@ -231,7 +231,7 @@ query usersWithOrganizations {
}
```
-This more elaborate way of storing many-to-many relationships will result in less data stored for the subgraph, and therefore to a subgraph that is often dramatically faster to index and to query.
+This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore to a Subgraph that is often dramatically faster to index and to query.
### إضافة تعليقات إلى المخطط (schema)
@@ -287,7 +287,7 @@ query {
}
```
-> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest.
+> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest.
## اللغات المدعومة
@@ -318,7 +318,7 @@ Supported language dictionaries:
Supported algorithms for ordering results:
-| Algorithm | Description |
-| ------------- | --------------------------------------------------------------- |
-| rank | استخدم جودة مطابقة استعلام النص-الكامل (0-1) لترتيب النتائج. |
-| proximityRank | Similar to rank but also includes the proximity of the matches. |
+| Algorithm | Description |
+| ------------- | ----------------------------------------------------------------------- |
+| rank | استخدم جودة مطابقة استعلام النص-الكامل (0-1) لترتيب النتائج. |
+| proximityRank | Similar to rank but also includes the proximity of the matches. |
diff --git a/website/src/pages/ar/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/ar/subgraphs/developing/creating/starting-your-subgraph.mdx
index 8f2e787688c2..b7d5f7168427 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/starting-your-subgraph.mdx
+++ b/website/src/pages/ar/subgraphs/developing/creating/starting-your-subgraph.mdx
@@ -4,20 +4,32 @@ title: Starting Your Subgraph
## نظره عامة
-The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs.
+The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs.
-When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL.
+When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL.
-Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs.
+Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs.
### Start Building
-Start the process and build a subgraph that matches your needs:
+Start the process and build a Subgraph that matches your needs:
1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure
-2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component
+2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component
3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema
4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings
-5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features
+5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features
Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/).
+
+| الاصدار | ملاحظات الإصدار |
+| :-----: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing Subgraph features. |
diff --git a/website/src/pages/ar/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/ar/subgraphs/developing/creating/subgraph-manifest.mdx
index ba893838ca4e..8cc64d5cdd22 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/subgraph-manifest.mdx
+++ b/website/src/pages/ar/subgraphs/developing/creating/subgraph-manifest.mdx
@@ -4,19 +4,19 @@ title: Subgraph Manifest
## نظره عامة
-The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
+The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
-The **subgraph definition** consists of the following files:
+The **Subgraph definition** consists of the following files:
-- `subgraph.yaml`: Contains the subgraph manifest
+- `subgraph.yaml`: Contains the Subgraph manifest
-- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL
+- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL
- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide)
### Subgraph Capabilities
-A single subgraph can:
+A single Subgraph can:
- Index data from multiple smart contracts (but not multiple networks).
@@ -24,12 +24,12 @@ A single subgraph can:
- Add an entry for each contract that requires indexing to the `dataSources` array.
-The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md).
+The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md).
-For the example subgraph listed above, `subgraph.yaml` is:
+For the example Subgraph listed above, `subgraph.yaml` is:
```yaml
-specVersion: 0.0.4
+specVersion: 1.3.0
description: Gravatar for Ethereum
repository: https://github.com/graphprotocol/graph-tooling
schema:
@@ -54,7 +54,7 @@ dataSources:
data: 'bar'
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Gravatar
@@ -79,47 +79,47 @@ dataSources:
## Subgraph Entries
-> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/).
+> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/).
الإدخالات الهامة لتحديث manifest هي:
-- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases.
+- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases.
-- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio.
+- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio.
-- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer.
+- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer.
- `features`: a list of all used [feature](#experimental-features) names.
-- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section.
+- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section.
-- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts.
+- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts.
- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created.
- `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`.
-- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development.
+- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development.
- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file.
- `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings.
-- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store.
+- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store.
-- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store.
+- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store.
-- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract.
+- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract.
-A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array.
+A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array.
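
The manifest entries described above can be sketched together in a single `dataSources` entry. This is an illustrative fragment only: the data source name, address, and block numbers are placeholders, and `endBlock` requires `specVersion` >= `0.0.9`:

```yaml
dataSources:
  - kind: ethereum/contract
    name: Gravity # hypothetical data source name
    network: mainnet
    source:
      abi: Gravity
      address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' # placeholder address
      startBlock: 6175244 # typically the contract-creation block
      endBlock: 7175245 # optional; stop indexing at this block (specVersion >= 0.0.9)
    # context values become available to mappings at runtime
    context:
      foo:
        type: Bool
        data: true
      bar:
        type: String
        data: 'bar'
```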
## Event Handlers
-Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic.
+Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. This enables Subgraphs to process and store event data according to defined logic.
### Defining an Event Handler
-An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected.
+An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected.
```yaml
dataSources:
@@ -131,7 +131,7 @@ dataSources:
abi: Gravity
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Gravatar
@@ -149,11 +149,11 @@ dataSources:
## معالجات الاستدعاء(Call Handlers)
-While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured.
+While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured.
Call handlers will only trigger in one of two cases: when the function specified is called by an account other than the contract itself or when it is marked as external in Solidity and called as part of another function in the same contract.
-> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network.
+> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers and are supported on every EVM network.
### تعريف معالج الاستدعاء
@@ -169,7 +169,7 @@ dataSources:
abi: Gravity
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Gravatar
@@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han
### دالة الـ Mapping
-Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument:
+Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument:
```typescript
import { CreateGravatarCall } from '../generated/Gravity/Gravity'
@@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a
## معالجات الكتلة
-In addition to subscribing to contract events or function calls, a subgraph may want to update its data as new blocks are appended to the chain. To achieve this a subgraph can run a function after every block or after blocks that match a pre-defined filter.
+In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this a Subgraph can run a function after every block or after blocks that match a pre-defined filter.
### الفلاتر المدعومة
@@ -218,7 +218,7 @@ filter:
_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._
-> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing.
+> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing.
The absence of a filter for a block handler will ensure that the handler is called every block. A data source can only contain one block handler for each filter type.
@@ -232,7 +232,7 @@ dataSources:
abi: Gravity
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Gravatar
@@ -261,7 +261,7 @@ blockHandlers:
every: 10
```
-The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals.
+The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals.
#### Once Filter
@@ -276,7 +276,7 @@ blockHandlers:
kind: once
```
-The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing.
+The defined handler with the once filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing.
```ts
export function handleOnce(block: ethereum.Block): void {
@@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void {
### دالة الـ Mapping
-The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities.
+The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities.
```typescript
import { ethereum } from '@graphprotocol/graph-ts'
@@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. By de
Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them.
-To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false.
+To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false.
```yaml
eventHandlers:
@@ -360,7 +360,7 @@ dataSources:
abi: Factory
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/mappings/factory.ts
entities:
@@ -390,7 +390,7 @@ templates:
abi: Exchange
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/mappings/exchange.ts
entities:
@@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ
## كتل البدء
-The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created.
+The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created.
```yaml
dataSources:
@@ -467,7 +467,7 @@ dataSources:
startBlock: 6627917
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/mappings/factory.ts
entities:
@@ -488,13 +488,13 @@ dataSources:
## Indexer Hints
-The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning.
+The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning.
> This feature is available from `specVersion: 1.0.0`
### Prune
-`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include:
+`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include:
1. `"never"`: No pruning of historical data; retains the entire history.
2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance.
@@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde
prune: auto
```
-> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities.
+> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities.
History as of a given block is required for:
-- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history
-- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block
-- Rewinding the subgraph back to that block
+- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history
+- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block
+- Rewinding the Subgraph back to that block
If historical data as of the block has been pruned, the above capabilities will not be available.
> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data.
-For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings:
+For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your Subgraph's settings:
To retain a specific amount of historical data:
@@ -532,3 +532,18 @@ To preserve the complete history of entity states:
indexerHints:
prune: never
```
+
+## SpecVersion Releases
+
+| الاصدار | ملاحظات الإصدار |
+| :-----: | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing Subgraph features. |
diff --git a/website/src/pages/ar/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/ar/subgraphs/developing/creating/unit-testing-framework.mdx
index e72d68bef7c8..44c9fedacb10 100644
--- a/website/src/pages/ar/subgraphs/developing/creating/unit-testing-framework.mdx
+++ b/website/src/pages/ar/subgraphs/developing/creating/unit-testing-framework.mdx
@@ -2,12 +2,12 @@
title: اختبار وحدة Framework
---
-Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs.
+Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs.
## Benefits of Using Matchstick
- It's written in Rust and optimized for high performance.
-- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more.
+- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more.
## Getting Started
@@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra
### Using Matchstick
-To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified).
+To use **Matchstick** in your Subgraph project, open a terminal, navigate to the root folder of your project, and run `graph test [options]` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified).
### CLI options
@@ -113,7 +113,7 @@ graph test path/to/file.test.ts
```sh
-c, --coverage Run the tests in coverage mode
--d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph)
+-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph)
-f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image.
-h, --help Show usage information
-l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes)
@@ -145,17 +145,17 @@ libsFolder: path/to/libs
manifestPath: path/to/subgraph.yaml
```
-### Demo subgraph
+### Demo Subgraph
You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph).
### Video tutorials
-Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h)
+You can also check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h)
## Tests structure
-_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_
+_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_
### describe()
@@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im
There we go - we've created our first test! 👏
-Now in order to run our tests you simply need to run the following in your subgraph root folder:
+Now in order to run our tests you simply need to run the following in your Subgraph root folder:
`graph test Gravity`
@@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri
Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file.
-NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow:
+NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for Matchstick to detect it, like the `processGravatar()` function in the test example below:
`.test.ts` file:
@@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index'
import { ipfs } from '@graphprotocol/graph-ts'
import { gravatarFromIpfs } from './utils'
-// Export ipfs.map() callback in order for matchstck to detect it
+// Export ipfs.map() callback in order for matchstick to detect it
export { processGravatar } from './utils'
test('ipfs.cat', () => {
@@ -1172,7 +1172,7 @@ templates:
network: mainnet
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/token-lock-wallet.ts
handler: handleMetadata
@@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => {
## Test Coverage
-Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
+Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as a whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead, we rely on the assertion that if a given handler has been called, the event/function for it has been properly mocked.
@@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as
## مصادر إضافية
-For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_).
+For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_).
## Feedback
diff --git a/website/src/pages/ar/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/ar/subgraphs/developing/deploying/multiple-networks.mdx
index 4f7dcd3864e8..3b2b1bbc70ae 100644
--- a/website/src/pages/ar/subgraphs/developing/deploying/multiple-networks.mdx
+++ b/website/src/pages/ar/subgraphs/developing/deploying/multiple-networks.mdx
@@ -1,12 +1,13 @@
---
title: Deploying a Subgraph to Multiple Networks
+sidebarTitle: Deploying to Multiple Networks
---
-This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/).
+This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/).
-## Deploying the subgraph to multiple networks
+## Deploying the Subgraph to multiple networks
-In some cases, you will want to deploy the same subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different.
+In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different.
### Using `graph-cli`
@@ -20,7 +21,7 @@ Options:
--network-file Networks config file path (default: "./networks.json")
```
-You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development.
+You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development.
> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks.
@@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit
> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option.
-Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`:
+Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`:
```yaml
# ...
@@ -96,7 +97,7 @@ yarn build --network sepolia
yarn build --network sepolia --network-file path/to/config
```
-The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this:
+The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this:
```yaml
# ...
@@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config
One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/).
-To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network:
+To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network:
```json
{
@@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional
}
```
-To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands:
+To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands:
```sh
# Mainnet:
@@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e
**Note:** This approach can also be applied to more complex situations where it is necessary to substitute more than contract addresses and network names, or where mappings or ABIs also need to be generated from templates.
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error.
-## Subgraph Studio subgraph archive policy
+## Subgraph Studio Subgraph archive policy
-A subgraph version in Studio is archived if and only if it meets the following criteria:
+A Subgraph version in Studio is archived if and only if it meets the following criteria:
- The version is not published to the network (or pending publish)
- The version was created 45 or more days ago
-- The subgraph hasn't been queried in 30 days
+- The Subgraph hasn't been queried in 30 days
-In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived.
+In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived.
-Every subgraph affected with this policy has an option to bring the version in question back.
+Every Subgraph affected with this policy has an option to bring the version in question back.
-## Checking subgraph health
+## Checking Subgraph health
-If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators.
+If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition, or it may start to fall behind due to performance issues or issues with the node operators.
-Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph:
+Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:
```graphql
{
@@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of
}
```
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error.
+This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` informs if the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error.
diff --git a/website/src/pages/ar/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/ar/subgraphs/developing/deploying/using-subgraph-studio.mdx
index d8880ef1a196..1e0826bfe148 100644
--- a/website/src/pages/ar/subgraphs/developing/deploying/using-subgraph-studio.mdx
+++ b/website/src/pages/ar/subgraphs/developing/deploying/using-subgraph-studio.mdx
@@ -2,23 +2,23 @@
title: Deploying Using Subgraph Studio
---
-Learn how to deploy your subgraph to Subgraph Studio.
+Learn how to deploy your Subgraph to Subgraph Studio.
-> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain.
+> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain.
## Subgraph Studio Overview
In [Subgraph Studio](https://thegraph.com/studio/), you can do the following:
-- View a list of subgraphs you've created
-- Manage, view details, and visualize the status of a specific subgraph
-- إنشاء وإدارة مفاتيح API الخاصة بك لـ subgraphs محددة
+- View a list of Subgraphs you've created
+- Manage, view details, and visualize the status of a specific Subgraph
+- Create and manage your API keys for specific Subgraphs
- Restrict your API keys to specific domains and allow only certain Indexers to query with them
-- Create your subgraph
-- Deploy your subgraph using The Graph CLI
-- Test your subgraph in the playground environment
-- Integrate your subgraph in staging using the development query URL
-- Publish your subgraph to The Graph Network
+- Create your Subgraph
+- Deploy your Subgraph using The Graph CLI
+- Test your Subgraph in the playground environment
+- Integrate your Subgraph in staging using the development query URL
+- Publish your Subgraph to The Graph Network
- Manage your billing
## Install The Graph CLI
@@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli
1. Open [Subgraph Studio](https://thegraph.com/studio/).
2. Connect your wallet to sign in.
- You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe.
-3. After you sign in, your unique deploy key will be displayed on your subgraph details page.
- - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised.
+3. After you sign in, your unique deploy key will be displayed on your Subgraph details page.
+ - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised.
-> Important: You need an API key to query subgraphs
+> Important: You need an API key to query Subgraphs
### How to Create a Subgraph in Subgraph Studio
@@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli
### Subgraph Compatibility with The Graph Network
-In order to be supported by Indexers on The Graph Network, subgraphs must:
-
-- Index a [supported network](/supported-networks/)
-- يجب ألا تستخدم أيًا من الميزات التالية:
- - ipfs.cat & ipfs.map
- - أخطاء غير فادحة
- - Grafting
+To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo.
## Initialize Your Subgraph
-Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command:
+Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command:
```bash
graph init
```
-You can find the `` value on your subgraph details page in Subgraph Studio, see image below:
+You can find the `` value on your Subgraph details page in Subgraph Studio, see image below:

-After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected.
+After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected.
## Graph Auth
-Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page.
+Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page.
Then, use the following command to authenticate from the CLI:
@@ -91,11 +85,11 @@ graph auth
## Deploying a Subgraph
-Once you are ready, you can deploy your subgraph to Subgraph Studio.
+Once you are ready, you can deploy your Subgraph to Subgraph Studio.
-> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network.
+> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network.
-Use the following CLI command to deploy your subgraph:
+Use the following CLI command to deploy your Subgraph:
```bash
graph deploy
@@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label.
## Testing Your Subgraph
-After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.
+After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.
-Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph.
+Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph.
## Publish Your Subgraph
-In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
+In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
## Versioning Your Subgraph with the CLI
-If you want to update your subgraph, you can do the following:
+If you want to update your Subgraph, you can do the following:
- You can deploy a new version to Studio using the CLI (it will only be private at this point).
- Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer).
-- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index.
+- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index.
-You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment.
+You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment.
-> Note: There are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/).
+> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/).
## Automatic Archiving of Subgraph Versions
-Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio.
+Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio.
-> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived.
+> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived.

diff --git a/website/src/pages/ar/subgraphs/developing/developer-faq.mdx b/website/src/pages/ar/subgraphs/developing/developer-faq.mdx
index f0e9ba0cd865..016a7a8e5a04 100644
--- a/website/src/pages/ar/subgraphs/developing/developer-faq.mdx
+++ b/website/src/pages/ar/subgraphs/developing/developer-faq.mdx
@@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o
## Subgraph Related
-### 1. What is a subgraph?
+### 1. What is a Subgraph?
-A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query.
+A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query.
-### 2. What is the first step to create a subgraph?
+### 2. What is the first step to create a Subgraph?
-To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/).
+To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/).
-### 3. Can I still create a subgraph if my smart contracts don't have events?
+### 3. Can I still create a Subgraph if my smart contracts don't have events?
-It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data.
+It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data.
-If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower.
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower.
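
As a sketch of what that looks like in the manifest (the function signature and handler names below are illustrative, not from the original answer), call and block handlers are declared alongside event handlers in `subgraph.yaml`:

```yaml
mapping:
  # ...
  callHandlers:
    # Triggered whenever this function is called on the contract
    - function: createGravatar(string,string)
      handler: handleCreateGravatar
  blockHandlers:
    # Triggered on every block — the slow path mentioned above
    - handler: handleBlock
```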
-### 4. Can I change the GitHub account associated with my subgraph?
+### 4. Can I change the GitHub account associated with my Subgraph?
-No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph.
+No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph.
-### 5. How do I update a subgraph on mainnet?
+### 5. How do I update a Subgraph on mainnet?
-You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on.
+You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action keeps your Subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on.
-### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying?
+### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying?
-يجب عليك إعادة نشر ال الفرعيةرسم بياني ، ولكن إذا لم يتغير الفرعيةرسم بياني (ID (IPFS hash ، فلن يضطر إلى المزامنة من البداية.
+You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning.
-### 7. How do I call a contract function or access a public state variable from my subgraph mappings?
+### 7. How do I call a contract function or access a public state variable from my Subgraph mappings?
Take a look at the `Access to smart contract state` section in the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state).
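
A minimal sketch of that pattern (the `Gravity` binding, `SomeEvent`, and the `gravatarToOwner` getter are hypothetical names; real bindings are generated by `graph codegen` from your ABI):

```typescript
// AssemblyScript mapping sketch — not standalone TypeScript.
// Bind the generated contract class to an address, then call a view function.
import { Gravity } from '../generated/Gravity/Gravity'

export function handleSomeEvent(event: SomeEvent): void {
  let contract = Gravity.bind(event.address)
  // try_ variants return a wrapper instead of aborting if the call reverts
  let owner = contract.try_gravatarToOwner(event.params.id)
  if (!owner.reverted) {
    // use owner.value ...
  }
}
```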
-### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings?
+### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings?
Not currently, as mappings are written in AssemblyScript.
@@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p
### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events?
-ضمن ال Subgraph ، تتم معالجة الأحداث دائمًا بالترتيب الذي تظهر به في الكتل ، بغض النظر عما إذا كان ذلك عبر عقود متعددة أم لا.
+Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not.
### 10. How are templates different from data sources?
-Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address.
+Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address.
Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates).
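
A minimal sketch of instantiating one (assuming a template named `Token` declared under `templates:` in `subgraph.yaml`, and a hypothetical spawning event that carries the new contract's address):

```typescript
// AssemblyScript mapping sketch — not standalone TypeScript.
import { Token } from '../generated/templates'

export function handleTokenCreated(event: TokenCreatedEvent): void {
  // Start indexing the newly spawned contract at this address
  Token.create(event.params.tokenAddress)
}
```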
-### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`?
+### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`?
Yes. In the `graph init` command itself, you can add multiple dataSources by entering contracts one after the other.
@@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest
If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256`, but this won't make it more unique.
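
As a sketch of such an ID (using the standard `graph-ts` `Bytes` helpers in an event handler context):

```typescript
// AssemblyScript sketch: a per-event ID that is unique as long as
// at most one entity is created per event.
import { Bytes, ethereum } from '@graphprotocol/graph-ts'

function makeEventId(event: ethereum.Event): Bytes {
  return event.transaction.hash.concatI32(event.logIndex.toI32())
}
```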
-### 15. Can I delete my subgraph?
+### 15. Can I delete my Subgraph?
-Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph.
+Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph.
## Network Related
@@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul
Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks)
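
A sketch of where that field lives in the manifest (the data source name and address are placeholders):

```yaml
dataSources:
  - kind: ethereum
    name: Gravity
    network: mainnet
    source:
      address: '0x0000000000000000000000000000000000000000' # placeholder
      abi: Gravity
      startBlock: 6175244 # first block to index, e.g. the contract creation block
```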
-### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync
+### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync
You should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks)
-### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed?
+### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed?
Yes! Try the following command, replacing "Organization / subgraphName" with your organization and the name of your Subgraph:
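
Alternatively, every Subgraph's own GraphQL endpoint exposes a `_meta` field that reports the latest indexed block, for example:

```graphql
{
  _meta {
    block {
      number
    }
  }
}
```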
@@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... }
### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high?
-Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.
+Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_, and to a specific Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.
## Miscellaneous
diff --git a/website/src/pages/ar/subgraphs/developing/introduction.mdx b/website/src/pages/ar/subgraphs/developing/introduction.mdx
index d3b71aaab704..946e62affbe7 100644
--- a/website/src/pages/ar/subgraphs/developing/introduction.mdx
+++ b/website/src/pages/ar/subgraphs/developing/introduction.mdx
@@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin
On The Graph, you can:
-1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
-2. Use GraphQL to query existing subgraphs.
+1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
+2. Use GraphQL to query existing Subgraphs.
### What is GraphQL?
-- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs.
### Developer Actions
-- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps.
-- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers.
-- Deploy, publish and signal your subgraphs within The Graph Network.
+- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps.
+- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers.
+- Deploy, publish and signal your Subgraphs within The Graph Network.
-### What are subgraphs?
+### What are Subgraphs?
-A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL.
+A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL.
-Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics.
+Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics.
diff --git a/website/src/pages/ar/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/ar/subgraphs/developing/managing/deleting-a-subgraph.mdx
index 5a4ac15e07fd..b8c2330ca49d 100644
--- a/website/src/pages/ar/subgraphs/developing/managing/deleting-a-subgraph.mdx
+++ b/website/src/pages/ar/subgraphs/developing/managing/deleting-a-subgraph.mdx
@@ -2,30 +2,30 @@
title: Deleting a Subgraph
---
-Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/).
+Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/).
-> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it.
+> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it.
## Step-by-Step
-1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/).
+1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/).
2. Click on the three-dots to the right of the "publish" button.
-3. Click on the option to "delete this subgraph":
+3. Click on the option to "delete this Subgraph":

-4. Depending on the subgraph's status, you will be prompted with various options.
+4. Depending on the Subgraph's status, you will be prompted with various options.
- - If the subgraph is not published, simply click “delete” and confirm.
- - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required.
+ - If the Subgraph is not published, simply click “delete” and confirm.
+ - If the Subgraph is published, you will need to confirm in your wallet before the Subgraph can be deleted from Studio. If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required.
-> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner.
+> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner.
### Important Reminders
-- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal.
-- Curators will not be able to signal on the subgraph anymore.
-- Curators that already signaled on the subgraph can withdraw their signal at an average share price.
-- Deleted subgraphs will show an error message.
+- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal.
+- Curators will not be able to signal on the Subgraph anymore.
+- Curators that already signaled on the Subgraph can withdraw their signal at an average share price.
+- Deleted Subgraphs will show an error message.
diff --git a/website/src/pages/ar/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/ar/subgraphs/developing/managing/transferring-a-subgraph.mdx
index 0fc6632cbc40..e80bde3fa6d2 100644
--- a/website/src/pages/ar/subgraphs/developing/managing/transferring-a-subgraph.mdx
+++ b/website/src/pages/ar/subgraphs/developing/managing/transferring-a-subgraph.mdx
@@ -2,18 +2,18 @@
title: Transferring a Subgraph
---
-Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network.
+Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network.
## Reminders
-- Whoever owns the NFT controls the subgraph.
-- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network.
-- You can easily move control of a subgraph to a multi-sig.
-- A community member can create a subgraph on behalf of a DAO.
+- Whoever owns the NFT controls the Subgraph.
+- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network.
+- You can easily move control of a Subgraph to a multi-sig.
+- A community member can create a Subgraph on behalf of a DAO.
## View Your Subgraph as an NFT
-To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**:
+To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**:
```
https://opensea.io/your-wallet-address
@@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres
## Step-by-Step
-To transfer ownership of a subgraph, do the following:
+To transfer ownership of a Subgraph, do the following:
1. Use the UI built into Subgraph Studio:

-2. Choose the address that you would like to transfer the subgraph to:
+2. Choose the address that you would like to transfer the Subgraph to:

diff --git a/website/src/pages/ar/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/ar/subgraphs/developing/publishing/publishing-a-subgraph.mdx
index dca943ad3152..2bc0ec5f514c 100644
--- a/website/src/pages/ar/subgraphs/developing/publishing/publishing-a-subgraph.mdx
+++ b/website/src/pages/ar/subgraphs/developing/publishing/publishing-a-subgraph.mdx
@@ -1,10 +1,11 @@
---
title: Publishing a Subgraph to the Decentralized Network
+sidebarTitle: Publishing to the Decentralized Network
---
-Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network.
+Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network.
-When you publish a subgraph to the decentralized network, you make it available for:
+When you publish a Subgraph to the decentralized network, you make it available for:
- [Curators](/resources/roles/curating/) to begin curating it.
- [Indexers](/indexing/overview/) to begin indexing it.
@@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/).
1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard
2. Click on the **Publish** button
-3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/).
+3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/).
-All published versions of an existing subgraph can:
+All published versions of an existing Subgraph can:
- Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/).
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published.
+- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published.
-### Updating metadata for a published subgraph
+### Updating metadata for a published Subgraph
-- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio.
+- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio.
- Once you’ve saved your changes and published the updates, they will appear in Graph Explorer.
- It's important to note that this process will not create a new version since your deployment has not changed.
## Publishing from the CLI
-As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli).
+As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli).
1. Open the `graph-cli`.
2. Use the following commands: `graph codegen && graph build` then `graph publish`.
-3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice.
+3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice.

### Customizing your deployment
-You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags:
+You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags:
```
USAGE
@@ -61,33 +62,33 @@ FLAGS
```
-## Adding signal to your subgraph
+## Adding signal to your Subgraph
-Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph.
+Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph.
-- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled.
+- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signaled.
-- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
+- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
- Specific supported networks can be checked [here](/supported-networks/).
-> Adding signal to a subgraph which is not eligible for rewards will not attract additional Indexers.
+> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers.
>
-> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph.
+> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional Indexers to index your Subgraph.
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
+The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
-When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
+When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
-Indexers can find subgraphs to index based on curation signals they see in Graph Explorer.
+Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer.
-
+
-Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published.
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published.

-Alternatively, you can add GRT signal to a published subgraph from Graph Explorer.
+Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer.

diff --git a/website/src/pages/ar/subgraphs/developing/subgraphs.mdx b/website/src/pages/ar/subgraphs/developing/subgraphs.mdx
index b52ec5cd2843..b2d94218cd67 100644
--- a/website/src/pages/ar/subgraphs/developing/subgraphs.mdx
+++ b/website/src/pages/ar/subgraphs/developing/subgraphs.mdx
@@ -4,83 +4,83 @@ title: Subgraphs
## What is a Subgraph?
-A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL.
+A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL.
### Subgraph Capabilities
- **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3.
-- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/).
-- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer).
+- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/).
+- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer).
## Inside a Subgraph
-The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
+The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and makes queryable.
-The **subgraph definition** consists of the following files:
+The **Subgraph definition** consists of the following files:
-- `subgraph.yaml`: Contains the subgraph manifest
+- `subgraph.yaml`: Contains the Subgraph manifest
-- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL
+- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL
- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema
-To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/).
+To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/).
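To make the three files concrete, here is a hedged sketch of a minimal `subgraph.yaml` manifest. The contract address, ABI name, entity, and handler names below are illustrative placeholders, not taken from this page; real manifests vary by network and spec version.

```yaml
# Hypothetical minimal Subgraph manifest (placeholder names throughout).
specVersion: 1.0.0
schema:
  file: ./schema.graphql           # GraphQL schema defining stored entities
dataSources:
  - kind: ethereum
    name: ExampleContract          # placeholder contract name
    network: mainnet
    source:
      address: "0x0000000000000000000000000000000000000000"  # placeholder
      abi: ExampleContract
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      entities:
        - Transfer                 # entity declared in schema.graphql
      abis:
        - name: ExampleContract
          file: ./abis/ExampleContract.json
      eventHandlers:
        - event: Transfer(indexed address,indexed address,uint256)
          handler: handleTransfer  # function exported from mapping.ts
      file: ./mapping.ts
```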
## دورة حياة الـ Subgraph
-Here is a general overview of a subgraph’s lifecycle:
+Here is a general overview of a Subgraph’s lifecycle:

## Subgraph Development
-1. [Create a subgraph](/developing/creating-a-subgraph/)
-2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/)
-3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio)
-4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/)
-5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph)
+1. [Create a Subgraph](/developing/creating-a-subgraph/)
+2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/)
+3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio)
+4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/)
+5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph)
### Build locally
-Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs.
+Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs.
### Deploy to Subgraph Studio
-Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following:
+Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following:
-- Use its staging environment to index the deployed subgraph and make it available for review.
-- Verify that your subgraph doesn't have any indexing errors and works as expected.
+- Use its staging environment to index the deployed Subgraph and make it available for review.
+- Verify that your Subgraph doesn't have any indexing errors and works as expected.
### Publish to the Network
-When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network.
+When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network.
-- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers.
-- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT.
-- Published subgraphs have associated metadata, which provides other network participants with useful context and information.
+- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers.
+- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT.
+- Published Subgraphs have associated metadata, which provides other network participants with useful context and information.
### Add Curation Signal for Indexing
-Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph.
+Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing, you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph.
#### What is signal?
-- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it.
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume.
+- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it.
+- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume.
### Querying & Application Development
Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/).
-Learn more about [querying subgraphs](/subgraphs/querying/introduction/).
+Learn more about [querying Subgraphs](/subgraphs/querying/introduction/).
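To illustrate what a query against a Subgraph's GraphQL endpoint involves, the sketch below builds the JSON request body that would be POSTed to an endpoint. The endpoint URL and entity names are hypothetical placeholders, not taken from this page.

```python
import json

def build_subgraph_request(query: str) -> bytes:
    """Build the JSON body for a POST to a Subgraph's GraphQL endpoint."""
    return json.dumps({"query": query}).encode("utf-8")

# Hypothetical query; real entity names come from your schema.graphql,
# and the real endpoint URL comes from Subgraph Studio or Graph Explorer.
body = build_subgraph_request("{ transfers(first: 5) { id from to } }")
print(json.loads(body.decode())["query"])
```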
### Updating Subgraphs
-To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing.
+To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing.
-- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax.
-- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying.
+- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax.
+- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying.
### Deleting & Transferring Subgraphs
-If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/).
+If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/).
diff --git a/website/src/pages/ar/subgraphs/explorer.mdx b/website/src/pages/ar/subgraphs/explorer.mdx
index 512be28e8322..57d7712cc383 100644
--- a/website/src/pages/ar/subgraphs/explorer.mdx
+++ b/website/src/pages/ar/subgraphs/explorer.mdx
@@ -2,11 +2,11 @@
title: Graph Explorer
---
-Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).
+Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).
## نظره عامة
-Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
+Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
## Inside Explorer
@@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi
### Subgraphs Page
-After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:
+After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:
-- Your own finished subgraphs
+- Your own finished Subgraphs
- Subgraphs published by others
-- The exact subgraph you want (based on the date created, signal amount, or name).
+- The exact Subgraph you want (based on the date created, signal amount, or name).

-When you click into a subgraph, you will be able to do the following:
+When you click into a Subgraph, you will be able to do the following:
- Test queries in the playground and be able to leverage network details to make informed decisions.
-- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality.
+- Signal GRT on your own Subgraph or the Subgraphs of others to make Indexers aware of their importance and quality.
- - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
+ - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.

-On each subgraph’s dedicated page, you can do the following:
+On each Subgraph’s dedicated page, you can do the following:
-- أشر/الغي الإشارة على Subgraphs
+- Signal/Un-signal on Subgraphs
- اعرض المزيد من التفاصيل مثل المخططات و ال ID الحالي وبيانات التعريف الأخرى
-- بدّل بين الإصدارات وذلك لاستكشاف التكرارات السابقة ل subgraphs
-- استعلم عن subgraphs عن طريق GraphQL
-- اختبار subgraphs في playground
-- اعرض المفهرسين الذين يفهرسون Subgraphs معين
+- Switch versions to explore past iterations of the Subgraph
+- Query Subgraphs via GraphQL
+- Test Subgraphs in the playground
+- View the Indexers that are indexing on a certain Subgraph
- إحصائيات subgraphs (المخصصات ، المنسقين ، إلخ)
-- اعرض من قام بنشر ال Subgraphs
+- View the entity who published the Subgraph

@@ -53,7 +53,7 @@ On this page, you can see the following:
- Indexers who collected the most query fees
- Indexers with the highest estimated APR
-Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph.
+Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph.
### Participants Page
@@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every

-Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs.
+Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs.
-In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards.
+In the Indexers table, you can see an Indexer’s delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
**Specifics**
@@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s
- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
-- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing.
+- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated.
- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. An excess delegated stake cannot be used for allocations or rewards calculations.
- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time.
@@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici
#### 3. المفوضون Delegators
-Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed.
+Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed.
-- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve.
- - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on.
+- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve.
+ - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on.
- The bonding curve incentivizes Curators to curate the highest quality data sources.
In the The Curator table listed below you can see:
@@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ
A few key details to note:
-- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers.
-- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
+- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers.
+- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (i.e., during some epochs, Indexers might’ve collectively closed allocations that had been open for many days).

@@ -178,15 +178,15 @@ In this section, you can view the following:
### تبويب ال Subgraphs
-In the Subgraphs tab, you’ll see your published subgraphs.
+In the Subgraphs tab, you’ll see your published Subgraphs.
-> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.
+> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.

### تبويب الفهرسة
-In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.
+In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.
هذا القسم سيتضمن أيضا تفاصيل حول صافي مكافآت المفهرس ورسوم الاستعلام الصافي الخاصة بك. سترى المقاييس التالية:
@@ -223,13 +223,13 @@ In the Delegators tab, you can find the details of your active and historical de
### تبويب التنسيق Curating
-في علامة التبويب Curation ، ستجد جميع ال subgraphs التي تشير إليها (مما يتيح لك تلقي رسوم الاستعلام). الإشارة تسمح للمنسقين التوضيح للمفهرسين ماهي ال subgraphs ذات الجودة العالية والموثوقة ، مما يشير إلى ضرورة فهرستها.
+In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they should be indexed.
Within this tab, you'll find an overview of:
-- جميع ال subgraphs التي تقوم بتنسيقها مع تفاصيل الإشارة
-- إجمالي الحصة لكل subgraph
-- مكافآت الاستعلام لكل subgraph
+- All the Subgraphs you're curating on with signal details
+- Share totals per Subgraph
+- Query rewards per Subgraph
- Updated at date details

diff --git a/website/src/pages/ar/subgraphs/guides/arweave.mdx b/website/src/pages/ar/subgraphs/guides/arweave.mdx
new file mode 100644
index 000000000000..4bb8883b4bd0
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/guides/arweave.mdx
@@ -0,0 +1,239 @@
+---
+title: Building Subgraphs on Arweave
+---
+
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs!
+
+In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain.
+
+## What is Arweave?
+
+The Arweave protocol allows developers to store data permanently. This is the main difference between Arweave and IPFS: IPFS offers no guarantee of permanence, whereas files stored on Arweave can't be changed or deleted.
+
+Arweave has already built numerous libraries for integrating the protocol into a number of different programming languages. For more information you can check:
+
+- [Arwiki](https://arwiki.wiki/#/en/main)
+- [Arweave Resources](https://www.arweave.org/build)
+
+## What are Arweave Subgraphs?
+
+The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell Indexers (server operators) which data to index on a blockchain and save on their servers so that you can query it at any time using [GraphQL](https://graphql.org/).
+
+[Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on the Arweave protocol. The current integration only indexes Arweave as a blockchain (blocks and transactions); it does not index the stored files yet.
+
+## Building an Arweave Subgraph
+
+To be able to build and deploy Arweave Subgraphs, you need two packages:
+
+1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
+2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.
+
+## Subgraph's components
+
+There are three components of a Subgraph:
+
+### 1. Manifest - `subgraph.yaml`
+
+Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source.
+
+### 2. Schema - `schema.graphql`
+
+Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body.
+
+The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+
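As a sketch, a minimal `schema.graphql` for the `Block` and `Transaction` entities referenced later in the manifest could look like this (the entity and field names here are illustrative, not prescribed by the Arweave integration):

```graphql
type Block @entity(immutable: true) {
  id: ID! # independent hash of the block
  height: BigInt!
  timestamp: BigInt!
}

type Transaction @entity(immutable: true) {
  id: ID! # transaction id
  owner: Bytes! # public key of the transaction owner
  block: Block! # the block this transaction was included in
}
```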
+### 3. AssemblyScript Mappings - `mapping.ts`
+
+This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based on the schema you have listed.
+
+During Subgraph development there are two key commands:
+
+```bash
+$ graph codegen # generates types from the schema file identified in the manifest
+$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
+```
+
+## Subgraph Manifest Definition
+
+The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph:
+
+```yaml
+specVersion: 1.3.0
+description: Arweave Blocks Indexing
+schema:
+ file: ./schema.graphql # link to the schema file
+dataSources:
+ - kind: arweave
+ name: arweave-blocks
+ network: arweave-mainnet # The Graph only supports Arweave Mainnet
+ source:
+ owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet
+ startBlock: 0 # set this to 0 to start indexing from chain genesis
+ mapping:
+ apiVersion: 0.0.9
+ language: wasm/assemblyscript
+ file: ./src/blocks.ts # link to the file with the Assemblyscript mappings
+ entities:
+ - Block
+ - Transaction
+ blockHandlers:
+ - handler: handleBlock # the function name in the mapping file
+ transactionHandlers:
+ - handler: handleTx # the function name in the mapping file
+```
+
+- Arweave Subgraphs introduce a new kind of data source (`arweave`)
+- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet`
+- Arweave data sources introduce an optional `source.owner` field, which is the public key of an Arweave wallet
+
+Arweave data sources support two types of handlers:
+
+- `blockHandlers` - Run on every new Arweave block. No `source.owner` is required.
+- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`; if users want to process all transactions, they should provide `""` as the `source.owner`.
+
+> The `source.owner` can be the owner's address, or their Public Key.
+>
+> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users.
+>
+> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet.
+
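For example, a manifest fragment that processes transactions from any owner might look like this (a sketch based on the data source shown above):

```yaml
source:
  owner: '' # an empty owner matches transactions from every wallet
  startBlock: 0
mapping:
  # ...
  transactionHandlers:
    - handler: handleTx
```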
+## Schema Definition
+
+Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+
+## AssemblyScript Mappings
+
+The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).
+
+Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/).
+
+```tsx
+class Block {
+ timestamp: u64
+ lastRetarget: u64
+ height: u64
+ indepHash: Bytes
+ nonce: Bytes
+ previousBlock: Bytes
+ diff: Bytes
+ hash: Bytes
+ txRoot: Bytes
+ txs: Bytes[]
+ walletList: Bytes
+ rewardAddr: Bytes
+ tags: Tag[]
+ rewardPool: Bytes
+ weaveSize: Bytes
+ blockSize: Bytes
+ cumulativeDiff: Bytes
+ hashListMerkle: Bytes
+ poa: ProofOfAccess
+}
+
+class Transaction {
+ format: u32
+ id: Bytes
+ lastTx: Bytes
+ owner: Bytes
+ tags: Tag[]
+ target: Bytes
+ quantity: Bytes
+ data: Bytes
+ dataSize: Bytes
+ dataRoot: Bytes
+ signature: Bytes
+ reward: Bytes
+}
+```
+
+Block handlers receive a `Block`, while transaction handlers receive a `Transaction`.
+
+Writing the mappings of an Arweave Subgraph is very similar to writing the mappings of an Ethereum Subgraph. For more information, click [here](/developing/creating-a-subgraph/#writing-mappings).
+
+## Deploying an Arweave Subgraph in Subgraph Studio
+
+Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command.
+
+```bash
+graph deploy --access-token
+```
+
+## Querying an Arweave Subgraph
+
+The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
+
+## Example Subgraphs
+
+Here is an example Subgraph for reference:
+
+- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions)
+
+## FAQ
+
+### Can a Subgraph index Arweave and other chains?
+
+No, a Subgraph can only support data sources from one chain/network.
+
+### Can I index the stored files on Arweave?
+
+Currently, The Graph is only indexing Arweave as a blockchain (its blocks and transactions).
+
+### Can I identify Bundlr bundles in my Subgraph?
+
+This is not currently supported.
+
+### How can I filter transactions to a specific account?
+
+The `source.owner` can be the user's public key or account address.
+
+### What is the current encryption format?
+
+Data is generally passed into the mappings as Bytes, which, if stored directly, are returned in the Subgraph in `hex` format (e.g. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/).
+
+The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`:
+
+```ts
+const base64Alphabet = [
+ "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
+ "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
+ "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
+ "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
+ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "+", "/"
+];
+
+const base64UrlAlphabet = [
+ "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
+ "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
+ "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
+ "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
+ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "-", "_"
+];
+
+function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string {
+ let alphabet = urlSafe ? base64UrlAlphabet : base64Alphabet;
+
+ let result = '', i: i32, l = bytes.length;
+ for (i = 2; i < l; i += 3) {
+ result += alphabet[bytes[i - 2] >> 2];
+ result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+ result += alphabet[((bytes[i - 1] & 0x0F) << 2) | (bytes[i] >> 6)];
+ result += alphabet[bytes[i] & 0x3F];
+ }
+ if (i === l + 1) { // 1 octet yet to write
+ result += alphabet[bytes[i - 2] >> 2];
+ result += alphabet[(bytes[i - 2] & 0x03) << 4];
+ if (!urlSafe) {
+ result += "==";
+ }
+ }
+ if (i === l) { // 2 octets yet to write
+ result += alphabet[bytes[i - 2] >> 2];
+ result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+ result += alphabet[(bytes[i - 1] & 0x0F) << 2];
+ if (!urlSafe) {
+ result += "=";
+ }
+ }
+ return result;
+}
+```
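If you want to sanity-check the expected output outside of a Subgraph, Node.js's built-in `Buffer` implements the same encodings (this snippet is for local verification only and assumes a Node.js environment):

```typescript
// Encode the ASCII bytes of "Graph" three ways and compare the renderings.
const bytes = Uint8Array.from([0x47, 0x72, 0x61, 0x70, 0x68]) // "Graph"
const buf = Buffer.from(bytes)

const hex = buf.toString('hex') // what a Subgraph returns for raw Bytes
const b64 = buf.toString('base64') // padded, '+' and '/' alphabet
const b64url = buf.toString('base64url') // unpadded, '-' and '_' alphabet

console.log(hex, b64, b64url)
```

Arweave transaction ids are rendered in the unpadded, URL-safe form, which is why the helper above takes a `urlSafe` flag.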
diff --git a/website/src/pages/ar/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/ar/subgraphs/guides/contract-analyzer.mdx
new file mode 100644
index 000000000000..84aeda12e0fc
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/guides/contract-analyzer.mdx
@@ -0,0 +1,117 @@
+---
+title: Smart Contract Analysis with Cana CLI
+---
+
+Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains.
+
+## Overview
+
+**Cana CLI** is a command-line tool that streamlines the analysis of smart contract metadata relevant to Subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more.
+
+### Key Features
+
+With Cana CLI, you can:
+
+- Detect deployment blocks
+- Verify source code
+- Extract ABIs & event signatures
+- Identify proxy and implementation contracts
+- Support multiple chains
+
+### Prerequisites
+
+Before installing Cana CLI, make sure you have:
+
+- [Node.js v16+](https://nodejs.org/en)
+- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install)
+- Block explorer API keys
+
+### Installation & Setup
+
+1. Install Cana CLI
+
+Use npm to install it globally:
+
+```bash
+npm install -g contract-analyzer
+```
+
+2. Configure Cana CLI
+
+Set up a blockchain environment for analysis:
+
+```bash
+cana setup
+```
+
+During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL.
+
+After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use.
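The stored configuration might look roughly like the following (the field names here are illustrative only — inspect your own `~/.contract-analyzer/config.json` for the exact shape):

```json
{
  "chains": {
    "ethereum": {
      "explorerApiKey": "YOUR_API_KEY",
      "explorerApiUrl": "https://api.etherscan.io/api"
    }
  },
  "selectedChain": "ethereum"
}
```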
+
+### Steps: Using Cana CLI for Smart Contract Analysis
+
+#### 1. Select a Chain
+
+Cana CLI supports multiple EVM-compatible chains.
+
+To list the chains that have been added, run this command:
+
+```bash
+cana chains
+```
+
+Then select a chain with this command:
+
+```bash
+cana chains --switch
+```
+
+Once a chain is selected, all subsequent contract analyses run against that chain.
+
+#### 2. Basic Contract Analysis
+
+Run the following command to analyze a contract:
+
+```bash
+cana analyze 0xContractAddress
+```
+
+or
+
+```bash
+cana -a 0xContractAddress
+```
+
+This command fetches and displays essential contract information in the terminal using a clear, organized format.
+
+#### 3. Understanding the Output
+
+Cana CLI prints results to the terminal and, when detailed contract data is successfully retrieved, writes them to a structured directory:
+
+```
+contracts-analyzed/
+└── ContractName_chainName_YYYY-MM-DD/
+ ├── contract/ # Folder for individual contract files
+ ├── abi.json # Contract ABI
+ └── event-information.json # Event signatures and examples
+```
+
+This format makes it easy to reference contract metadata, event signatures, and ABIs during Subgraph development.
+
+#### 4. Chain Management
+
+Add and manage chains:
+
+```bash
+cana setup # Add a new chain
+cana chains # List configured chains
+cana chains -s # Switch chains
+```
+
+### Troubleshooting
+
+Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions.
+
+### Conclusion
+
+With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support Subgraph development with ease.
diff --git a/website/src/pages/ar/subgraphs/guides/enums.mdx b/website/src/pages/ar/subgraphs/guides/enums.mdx
new file mode 100644
index 000000000000..846faecc1706
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/guides/enums.mdx
@@ -0,0 +1,274 @@
+---
+title: Categorize NFT Marketplaces Using Enums
+---
+
+Use Enums to make your code cleaner and less error-prone. Here's a full example of using Enums on NFT marketplaces.
+
+## What are Enums?
+
+Enums, or enumeration types, are a data type that lets you define a fixed set of allowed values.
+
+### Example of Enums in Your Schema
+
+If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned.
+
+You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity.
+
+Here's what an enum definition might look like in your schema, based on the example above:
+
+```graphql
+enum TokenStatus {
+ OriginalOwner
+ SecondOwner
+ ThirdOwner
+}
+```
+
+This means that when you use the `TokenStatus` type in your schema, you expect it to be exactly one of the predefined values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`, ensuring consistency and validity.
+
+To learn more about enums, check out [Creating a Subgraph](/developing/creating-a-subgraph/#enums) and [GraphQL documentation](https://graphql.org/learn/schema/#enumeration-types).
+
+## Benefits of Using Enums
+
+- **Clarity:** Enums provide meaningful names for values, making data easier to understand.
+- **Validation:** Enums enforce strict value definitions, preventing invalid data entries.
+- **Maintainability:** When you need to change or add new categories, enums allow you to do this in a focused manner.
+
+### Without Enums
+
+If you choose to define the type as a string instead of using an Enum, your code might look like this:
+
+```graphql
+type Token @entity {
+ id: ID!
+ tokenId: BigInt!
+ owner: Bytes! # Owner of the token
+ tokenStatus: String! # String field to track token status
+ timestamp: BigInt!
+}
+```
+
+In this schema, `TokenStatus` is a simple string with no specific, allowed values.
+
+#### Why is this a problem?
+
+- There's no restriction on `TokenStatus` values, so any string can be accidentally assigned. This makes it hard to ensure that only valid statuses like `OriginalOwner`, `SecondOwner`, or `ThirdOwner` are set.
+- It's easy to make typos such as `Orgnalowner` instead of `OriginalOwner`, making the data and potential queries unreliable.
+
+### With Enums
+
+Instead of assigning free-form strings, you can define an enum for `TokenStatus` with specific values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`. Using an enum ensures only allowed values are used.
+
+Enums provide type safety, minimize typo risks, and ensure consistent and reliable results.
+
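The same guarantee can be illustrated in plain TypeScript (this standalone sketch mirrors the schema enum; in a real Subgraph the enum lives in `schema.graphql`):

```typescript
// A string-backed enum: only these three values are valid statuses.
enum TokenStatus {
  OriginalOwner = 'OriginalOwner',
  SecondOwner = 'SecondOwner',
  ThirdOwner = 'ThirdOwner',
}

// Free-form strings must be checked before being treated as a TokenStatus.
function isValidStatus(value: string): boolean {
  return (Object.values(TokenStatus) as string[]).includes(value)
}

console.log(isValidStatus('OriginalOwner')) // a defined enum value → true
console.log(isValidStatus('Orgnalowner')) // the typo from above → false
```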
+## Defining Enums for NFT Marketplaces
+
+> Note: The following guide uses the CryptoCoven NFT smart contract.
+
+To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema:
+
+```gql
+# Enum for Marketplaces that the CryptoCoven contract interacted with (likely a Trade/Mint)
+enum Marketplace {
+ OpenSeaV1 # Represents when a CryptoCoven NFT is traded on the OpenSeaV1 marketplace
+ OpenSeaV2 # Represents when a CryptoCoven NFT is traded on the OpenSeaV2 marketplace
+ SeaPort # Represents when a CryptoCoven NFT is traded on the SeaPort marketplace
+ LooksRare # Represents when a CryptoCoven NFT is traded on the LooksRare marketplace
+ # ...and other marketplaces
+}
+```
+
+## Using Enums for NFT Marketplaces
+
+Once defined, enums can be used throughout your Subgraph to categorize transactions or events.
+
+For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum.
+
+### Implementing a Function for NFT Marketplaces
+
+Here's how you can implement a function to retrieve the marketplace name from the enum as a string:
+
+```ts
+export function getMarketplaceName(marketplace: Marketplace): string {
+  // Using if-else statements to map the enum value to a string
+  if (marketplace === Marketplace.OpenSeaV1) {
+    return 'OpenSeaV1' // If the marketplace is OpenSeaV1, return its string representation
+  } else if (marketplace === Marketplace.OpenSeaV2) {
+    return 'OpenSeaV2'
+  } else if (marketplace === Marketplace.SeaPort) {
+    return 'SeaPort' // If the marketplace is SeaPort, return its string representation
+  } else if (marketplace === Marketplace.LooksRare) {
+    return 'LooksRare' // If the marketplace is LooksRare, return its string representation
+    // ... and other marketplaces
+  } else {
+    return 'Unknown' // Fallback so the function returns a string on every path
+  }
+}
+```
+
+## Best Practices for Using Enums
+
+- **Consistent Naming:** Use clear, descriptive names for enum values to improve readability.
+- **Centralized Management:** Keep enums in a single file for consistency. This makes enums easier to update and ensures they are the single source of truth.
+- **Documentation:** Add comments to enums to clarify their purpose and usage.
+
+## Using Enums in Queries
+
+Enums in queries help you improve data quality and make your results easier to interpret. They function as filters and response elements, ensuring consistency and reducing errors in marketplace values.
+
+**Specifics**
+
+- **Filtering with Enums:** Enums provide clear filters, allowing you to confidently include or exclude specific marketplaces.
+- **Enums in Responses:** Enums guarantee that only recognized marketplace names are returned, making the results standardized and accurate.
+
+### Sample Queries
+
+#### Query 1: Account With The Highest NFT Marketplace Interactions
+
+This query does the following:
+
+- It finds the account with the highest unique NFT marketplace interactions, which is great for analyzing cross-marketplace activity.
+- The `marketplaces` field uses the `Marketplace` enum, ensuring consistent and validated marketplace values in the response.
+
+```gql
+{
+ accounts(first: 1, orderBy: uniqueMarketplacesCount, orderDirection: desc) {
+ id
+ sendCount
+ receiveCount
+ totalSpent
+ uniqueMarketplacesCount
+ marketplaces {
+ marketplace # This field returns the enum value representing the marketplace
+ }
+ }
+}
+```
+
+#### Returns
+
+This response provides account details and a list of unique marketplace interactions with enum values for standardized clarity:
+
+```gql
+{
+ "data": {
+ "accounts": [
+ {
+ "id": "0xb3abc96cb9a61576c03c955d75b703a890a14aa0",
+ "sendCount": "44",
+ "receiveCount": "44",
+ "totalSpent": "1197500000000000000",
+ "uniqueMarketplacesCount": "7",
+ "marketplaces": [
+ {
+ "marketplace": "OpenSeaV1"
+ },
+ {
+ "marketplace": "OpenSeaV2"
+ },
+ {
+ "marketplace": "GenieSwap"
+ },
+ {
+ "marketplace": "CryptoCoven"
+ },
+ {
+ "marketplace": "Unknown"
+ },
+ {
+ "marketplace": "LooksRare"
+ },
+ {
+ "marketplace": "NFTX"
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+#### Query 2: Most Active Marketplace for CryptoCoven transactions
+
+This query does the following:
+
+- It identifies the marketplace with the highest volume of CryptoCoven transactions.
+- It uses the marketplace enum to ensure that only valid marketplace types appear in the response, adding reliability and consistency to your data.
+
+```gql
+{
+ marketplaceInteractions(first: 1, orderBy: transactionCount, orderDirection: desc) {
+ marketplace
+ transactionCount
+ }
+}
+```
+
+#### Result 2
+
+The expected response includes the marketplace and the corresponding transaction count, using the enum to indicate the marketplace type:
+
+```gql
+{
+ "data": {
+ "marketplaceInteractions": [
+ {
+ "marketplace": "Unknown",
+ "transactionCount": "222"
+ }
+ ]
+ }
+}
+```
+
+#### Query 3: Marketplace Interactions with High Transaction Counts
+
+This query does the following:
+
+- It retrieves the top four marketplaces with over 100 transactions, excluding "Unknown" marketplaces.
+- It uses enums as filters to ensure that only valid marketplace types are included, increasing accuracy.
+
+```gql
+{
+ marketplaceInteractions(
+ first: 4
+ orderBy: transactionCount
+ orderDirection: desc
+ where: { transactionCount_gt: "100", marketplace_not: "Unknown" }
+ ) {
+ marketplace
+ transactionCount
+ }
+}
+```
+
+#### Result 3
+
+Expected output includes the marketplaces that meet the criteria, each represented by an enum value:
+
+```gql
+{
+ "data": {
+ "marketplaceInteractions": [
+ {
+ "marketplace": "NFTX",
+ "transactionCount": "201"
+ },
+ {
+ "marketplace": "OpenSeaV1",
+ "transactionCount": "148"
+ },
+ {
+ "marketplace": "CryptoCoven",
+ "transactionCount": "117"
+ },
+ {
+ "marketplace": "OpenSeaV1",
+ "transactionCount": "111"
+ }
+ ]
+ }
+}
+```
+
+## Additional Resources
+
+For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums).
diff --git a/website/src/pages/ar/subgraphs/guides/grafting.mdx b/website/src/pages/ar/subgraphs/guides/grafting.mdx
new file mode 100644
index 000000000000..4b7dad1a54d9
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/guides/grafting.mdx
@@ -0,0 +1,202 @@
+---
+title: Replace a Contract and Keep its History With Grafting
+---
+
+In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs.
+
+## What is Grafting?
+
+Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing Subgraph working again after it has failed. It can also be used when adding a feature to a Subgraph that takes a long time to index from scratch.
+
+The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways:
+
+- It adds or removes entity types
+- It removes attributes from entity types
+- It adds nullable attributes to entity types
+- It turns non-nullable attributes into nullable attributes
+- It adds values to enums
+- It adds or removes interfaces
+- It changes for which entity types an interface is implemented
+
+For more information, you can check:
+
+- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs)
+
+In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract.
+
+## Important Note on Grafting When Upgrading to the Network
+
+> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network
+
+### Why Is This Important?
+
+Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio.
+
+### Best Practices
+
+**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected.
+
+**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data.
+
+By adhering to these guidelines, you minimize risks and ensure a smoother migration process.
+
+## Building an Existing Subgraph
+
+Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided:
+
+- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial)
+
+> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit).
+
+## Subgraph Manifest Definition
+
+The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use:
+
+```yaml
+specVersion: 1.3.0
+schema:
+ file: ./schema.graphql
+dataSources:
+ - kind: ethereum
+ name: Lock
+ network: sepolia
+ source:
+ address: '0xb3aabe721794b85fe4e72134795c2f93b4eb7e63'
+ abi: Lock
+ startBlock: 5955690
+ mapping:
+ kind: ethereum/events
+ apiVersion: 0.0.9
+ language: wasm/assemblyscript
+ entities:
+ - Withdrawal
+ abis:
+ - name: Lock
+ file: ./abis/Lock.json
+ eventHandlers:
+ - event: Withdrawal(uint256,uint256)
+ handler: handleWithdrawal
+ file: ./src/lock.ts
+```
+
+- The `Lock` data source is the ABI and contract address we will get when we compile and deploy the contract
+- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia`
+- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted.
+
+## Grafting Manifest Definition
+
+Grafting requires adding two new items to the original Subgraph manifest:
+
+```yaml
+---
+features:
+ - grafting # feature name
+graft:
+ base: Qm... # Subgraph ID of base Subgraph
+ block: 5956000 # block number
+```
+
+- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features).
+- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on.
+
+The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting.
+
+## Deploying the Base Subgraph
+
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example`
+2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo
+3. Once finished, verify that the Subgraph is indexing properly by running the following command in The Graph Playground:
+
+```graphql
+{
+ withdrawals(first: 5) {
+ id
+ amount
+ when
+ }
+}
+```
+
+It returns something like this:
+
+```json
+{
+ "data": {
+ "withdrawals": [
+ {
+ "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+ "amount": "0",
+ "when": "1716394824"
+ },
+ {
+ "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+ "amount": "0",
+ "when": "1716394848"
+ }
+ ]
+ }
+}
+```
+
+Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting.
+
+## Deploying the Grafting Subgraph
+
+The graft replacement `subgraph.yaml` will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc.
+
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement`
+2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) by the old contract that you care about, and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio.
+3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo
+4. Once finished, verify that the Subgraph is indexing properly by running the following command in The Graph Playground:
+
+```graphql
+{
+ withdrawals(first: 5) {
+ id
+ amount
+ when
+ }
+}
+```
+
+It should return the following:
+
+```json
+{
+ "data": {
+ "withdrawals": [
+ {
+ "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+ "amount": "0",
+ "when": "1716394824"
+ },
+ {
+ "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+ "amount": "0",
+ "when": "1716394848"
+ },
+ {
+ "id": "0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af06000000",
+ "amount": "0",
+ "when": "1716429732"
+ }
+ ]
+ }
+}
+```
+
+You can see that the `graft-replacement` Subgraph is indexing older `graft-example` data together with newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` afterwards, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Events 1 and 2) and the new transaction (Event 3) are combined in the `graft-replacement` Subgraph.
+
+Congrats! You have successfully grafted a Subgraph onto another Subgraph.
+
+## Additional Resources
+
+If you want more experience with grafting, here are a few examples for popular contracts:
+
+- [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml)
+- [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml)
+- [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml)
+
+To become even more of a Graph expert, consider learning about other ways to handle changes in underlying data sources. Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results.
+
+> Note: A lot of material from this article was taken from the previously published [Arweave article](/subgraphs/cookbook/arweave/)
diff --git a/website/src/pages/ar/subgraphs/guides/near.mdx b/website/src/pages/ar/subgraphs/guides/near.mdx
new file mode 100644
index 000000000000..04daec8b6ac7
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/guides/near.mdx
@@ -0,0 +1,283 @@
+---
+title: Building Subgraphs on NEAR
+---
+
+This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/).
+
+## What is NEAR?
+
+[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information.
+
+## What are NEAR Subgraphs?
+
+The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts.
+
+Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs:
+
+- Block handlers: these run on every new block
+- Receipt handlers: run every time a message is executed at a specified account
+
+[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt):
+
+> Receipts are the only actionable objects in the system. When we talk about "processing an action" on the NEAR platform, this eventually means "applying receipts" at some point.
+
+## Building a NEAR Subgraph
+
+`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs.
+
+`@graphprotocol/graph-ts` is a library of Subgraph-specific types.
+
+NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`.
+
+> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum.
+
+There are three aspects of Subgraph definition:
+
+**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source.
+
+**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+
+**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality.
+
+During Subgraph development there are two key commands:
+
+```bash
+$ graph codegen # generates types from the schema file identified in the manifest
+$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
+```
+
+### Subgraph Manifest Definition
+
+The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph:
+
+```yaml
+specVersion: 1.3.0
+schema:
+ file: ./src/schema.graphql # link to the schema file
+dataSources:
+ - kind: near
+ network: near-mainnet
+ source:
+ account: app.good-morning.near # This data source will monitor this account
+ startBlock: 10662188 # Required for NEAR
+ mapping:
+ apiVersion: 0.0.9
+ language: wasm/assemblyscript
+ blockHandlers:
+ - handler: handleNewBlock # the function name in the mapping file
+ receiptHandlers:
+ - handler: handleReceipt # the function name in the mapping file
+ file: ./src/mapping.ts # link to the file with the Assemblyscript mappings
+```
+
+- NEAR Subgraphs introduce a new `kind` of data source (`near`)
+- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet`
+- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account.
+- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least one prefix or suffix must be specified; they will match any account starting or ending with the listed values, respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary, the other field can be omitted.
+
+```yaml
+accounts:
+ prefixes:
+ - app
+ - good
+ suffixes:
+ - morning.near
+ - morning.testnet
+```
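+
+To make the rule above concrete, here is a small sketch in plain JavaScript of how such a prefix/suffix filter selects accounts. This mirrors our reading of the matching semantics, not Graph Node's actual implementation:
+
```javascript
// Sketch of the prefix/suffix matching semantics described above
// (our reading of the rule, not Graph Node's actual matcher).
function accountMatches(account, { prefixes = [], suffixes = [] }) {
  if (prefixes.length === 0 && suffixes.length === 0) {
    throw new Error('At least one prefix or suffix must be specified')
  }
  const prefixOk = prefixes.length === 0 || prefixes.some((p) => account.startsWith(p))
  const suffixOk = suffixes.length === 0 || suffixes.some((s) => account.endsWith(s))
  return prefixOk && suffixOk
}

const filter = { prefixes: ['app', 'good'], suffixes: ['morning.near', 'morning.testnet'] }
console.log(accountMatches('good-morning.near', filter)) // true
console.log(accountMatches('evening.near', filter)) // false
```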
+
+NEAR data sources support two types of handlers:
+
+- `blockHandlers`: run on every new NEAR block. No `source.account` is required.
+- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources).
+
+### Schema Definition
+
+Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+
+### AssemblyScript Mappings
+
+The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).
+
+NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/).
+
+```typescript
+class ExecutionOutcome {
+  gasBurnt: u64,
+  blockHash: Bytes,
+  id: Bytes,
+  logs: Array<string>,
+  receiptIds: Array<Bytes>,
+  tokensBurnt: BigInt,
+  executorId: string,
+}
+
+class ActionReceipt {
+  predecessorId: string,
+  receiverId: string,
+  id: CryptoHash,
+  signerId: string,
+  gasPrice: BigInt,
+  outputDataReceivers: Array<DataReceiver>,
+  inputDataIds: Array<CryptoHash>,
+  actions: Array<ActionValue>,
+}
+
+class BlockHeader {
+  height: u64,
+  prevHeight: u64, // Always zero when version < V3
+  epochId: Bytes,
+  nextEpochId: Bytes,
+  chunksIncluded: u64,
+  hash: Bytes,
+  prevHash: Bytes,
+  timestampNanosec: u64,
+  randomValue: Bytes,
+  gasPrice: BigInt,
+  totalSupply: BigInt,
+  latestProtocolVersion: u32,
+}
+
+class ChunkHeader {
+  gasUsed: u64,
+  gasLimit: u64,
+  shardId: u64,
+  chunkHash: Bytes,
+  prevBlockHash: Bytes,
+  balanceBurnt: BigInt,
+}
+
+class Block {
+  author: string,
+  header: BlockHeader,
+  chunks: Array<ChunkHeader>,
+}
+
+class ReceiptWithOutcome {
+  outcome: ExecutionOutcome,
+  receipt: ActionReceipt,
+  block: Block,
+}
+```
+
+These types are passed to block & receipt handlers:
+
+- Block handlers will receive a `Block`
+- Receipt handlers will receive a `ReceiptWithOutcome`
+
+Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution.
+
+This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs.
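+
+For illustration, here is the same idea in plain Node.js. A real mapping would call `json.fromString` from `@graphprotocol/graph-ts` instead; the `EVENT_JSON:` prefix below follows NEAR's NEP-297 event logging convention, and the payload fields are example values:
+
```javascript
// Illustration only: a mapping would use json.fromString from
// @graphprotocol/graph-ts. The EVENT_JSON: prefix follows NEAR's
// NEP-297 event convention; the payload fields are example values.
const rawLog = 'EVENT_JSON:{"standard":"nep171","version":"1.0.0","event":"nft_mint"}'
const payload = rawLog.startsWith('EVENT_JSON:') ? rawLog.slice('EVENT_JSON:'.length) : rawLog
const parsed = JSON.parse(payload)
console.log(parsed.event) // nft_mint
```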
+
+## Deploying a NEAR Subgraph
+
+Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released).
+
+Subgraph Studio and the upgrade Indexer on The Graph Network currently support indexing NEAR mainnet and testnet in beta, with the following network names:
+
+- `near-mainnet`
+- `near-testnet`
+
+More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/).
+
+As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph".
+
+Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command:
+
+```sh
+$ graph create --node <graph-node-url> <subgraph-name> # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
+$ graph deploy --node <graph-node-url> --ipfs https://api.thegraph.com/ipfs/ <subgraph-name> # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
+```
+
+The node configuration will depend on where the Subgraph is being deployed.
+
+### Subgraph Studio
+
+```sh
+graph auth
+graph deploy
+```
+
+### Local Graph Node (based on default configuration)
+
+```sh
+graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001 <subgraph-name>
+```
+
+Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself:
+
+```graphql
+{
+ _meta {
+ block {
+ number
+ }
+ }
+}
+```
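+
+A script can then compare the indexed block against the chain head to see how far the Subgraph has to go — a minimal sketch (the response shape matches the query above; the block numbers are made up):
+
```javascript
// Sketch: compare the _meta block number against the chain head.
// Response shape matches the query above; the numbers are invented.
function syncStatus(metaResponse, chainHead) {
  const indexed = metaResponse.data._meta.block.number
  return { indexed, behind: chainHead - indexed }
}

const response = { data: { _meta: { block: { number: 10662200 } } } }
console.log(syncStatus(response, 10662210)) // { indexed: 10662200, behind: 10 }
```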
+
+### Indexing NEAR with a Local Graph Node
+
+Running a Graph Node that indexes NEAR has the following operational requirements:
+
+- NEAR Indexer Framework with Firehose instrumentation
+- NEAR Firehose component(s)
+- Graph Node with Firehose endpoint configured
+
+We will provide more information on running the above components soon.
+
+## Querying a NEAR Subgraph
+
+The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
+
+## Example Subgraphs
+
+Here are some example Subgraphs for reference:
+
+[NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks)
+
+[NEAR Receipts](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts)
+
+## FAQ
+
+### How does the beta work?
+
+NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments!
+
+### Can a Subgraph index both NEAR and EVM chains?
+
+No, a Subgraph can only support data sources from one chain/network.
+
+### Can Subgraphs react to more specific triggers?
+
+Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support.
+
+### Will receipt handlers trigger for accounts and their sub-accounts?
+
+If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` used to match accounts and sub-accounts. For example, the following would match all `mintbase1.near` sub-accounts:
+
+```yaml
+accounts:
+ suffixes:
+ - mintbase1.near
+```
+
+### Can NEAR Subgraphs make view calls to NEAR accounts during mappings?
+
+This is not supported. We are evaluating whether this functionality is required for indexing.
+
+### Can I use data source templates in my NEAR Subgraph?
+
+This is not currently supported. We are evaluating whether this functionality is required for indexing.
+
+### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph?
+
+Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced.
+
+### My question hasn't been answered, where can I get more help building NEAR Subgraphs?
+
+If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com.
+
+## References
+
+- [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton)
diff --git a/website/src/pages/ar/subgraphs/guides/polymarket.mdx b/website/src/pages/ar/subgraphs/guides/polymarket.mdx
new file mode 100644
index 000000000000..74efe387b0d7
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/guides/polymarket.mdx
@@ -0,0 +1,148 @@
+---
+title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph
+sidebarTitle: Query Polymarket Data
+---
+
+Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains.
+
+## Polymarket Subgraph on Graph Explorer
+
+You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query.
+
+
+
+## How to use the Visual Query Editor
+
+The visual query editor helps you test sample queries from your Subgraph.
+
+You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want.
+
+### Example Query: Get the top 5 highest payouts from Polymarket
+
+```graphql
+{
+ redemptions(orderBy: payout, orderDirection: desc, first: 5) {
+ payout
+ redeemer
+ id
+ timestamp
+ }
+}
+```
+
+### Example output
+
+```json
+{
+ "data": {
+ "redemptions": [
+ {
+ "id": "0x8fbb68b7c0cbe9aca6024d063a843a23d046b5522270fd25c6a81c511cf517d1_0x3b",
+ "payout": "6274509531681",
+ "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c",
+ "timestamp": "1722929672"
+ },
+ {
+ "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0x7",
+ "payout": "2246253575996",
+ "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c",
+ "timestamp": "1726701528"
+ },
+ {
+ "id": "0x983b71c64b5075fc1179f4e03849af9c727be60de71c9e86e37ad0b3e43f9db9_0x26",
+ "payout": "2135448291991",
+ "redeemer": "0x5a181dcf3eb53a09fb32b20a5a9312fb8d26f689",
+ "timestamp": "1704932625"
+ },
+ {
+ "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0xa",
+ "payout": "1917395333835",
+ "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c",
+ "timestamp": "1726701528"
+ },
+ {
+ "id": "0xfe82e117201f5169abc822281ccf0469e6b3740fcb4e799d1b599f83b8f11656_0x30",
+ "payout": "1862505580000",
+ "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c",
+ "timestamp": "1722929866"
+ }
+ ]
+ }
+}
+```
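+
+Because the query asks for `orderBy: payout, orderDirection: desc`, the payouts come back sorted server-side. A quick sketch of checking that property on the values above (the payouts are raw integer strings, so compare them as `BigInt`s):
+
```javascript
// Sketch: verify a list of stringified integer payouts is descending,
// matching orderBy: payout, orderDirection: desc in the query above.
function isDescending(values) {
  return values.every((v, i) => i === 0 || BigInt(values[i - 1]) >= BigInt(v))
}

const payouts = ['6274509531681', '2246253575996', '2135448291991', '1917395333835', '1862505580000']
console.log(isDescending(payouts)) // true
```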
+
+## Polymarket's GraphQL Schema
+
+The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql).
+
+### Polymarket Subgraph Endpoint
+
+https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp
+
+The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegraph.com/explorer).
+
+
+
+## How to Get your own API Key
+
+1. Go to [https://thegraph.com/studio](https://thegraph.com/studio) and connect your wallet
+2. Go to https://thegraph.com/studio/apikeys/ to create an API key
+
+You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.
+
+100k queries per month are free, which is perfect for your side project!
+
+## Additional Polymarket Subgraphs
+
+- [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one)
+- [Polymarket Activity Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one)
+- [Polymarket Profit & Loss](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one)
+- [Polymarket Open Interest](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one)
+
+## How to Query with the API
+
+You can pass any GraphQL query to the Polymarket endpoint and receive data in json format.
+
+The following code example queries the Polymarket Subgraph endpoint and logs the response data.
+
+### Sample Code from node.js
+
+```javascript
+const axios = require('axios');
+
+const graphqlQuery = `{
+  positions(first: 5) {
+    condition
+    outcomeIndex
+  }
+}`;
+
+const queryUrl = 'https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp'
+
+const graphQLRequest = {
+ method: 'post',
+ url: queryUrl,
+ data: {
+ query: graphqlQuery,
+ },
+};
+
+// Send the GraphQL query
+axios(graphQLRequest)
+ .then((response) => {
+ // Handle the response here
+ const data = response.data.data
+ console.log(data)
+
+ })
+ .catch((error) => {
+ // Handle any errors
+ console.error(error);
+ });
+```
+
+### Additional resources
+
+For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/).
+
+To explore all the ways you can optimize & customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/ar/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/ar/subgraphs/guides/secure-api-keys-nextjs.mdx
new file mode 100644
index 000000000000..21ac0b74d31d
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/guides/secure-api-keys-nextjs.mdx
@@ -0,0 +1,123 @@
+---
+title: How to Secure API Keys Using Next.js Server Components
+---
+
+## Overview
+
+We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key).
+
+In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend.
+
+### Caveats
+
+- Next.js server components do not protect API keys from being drained using denial of service attacks.
+- The Graph Network gateways have denial of service detection and mitigation strategies in place, however using server components may weaken these protections.
+- Next.js server components introduce centralization risks as the server can go down.
+
+### Why It's Needed
+
+In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side.
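+
+As a toy model of that boundary: Next.js only inlines environment variables prefixed with `NEXT_PUBLIC_` into the client bundle, while everything else stays on the server. The sketch below is illustrative only, not how Next.js is implemented:
+
```javascript
// Toy model: which env vars Next.js would expose to the client bundle.
// Only NEXT_PUBLIC_-prefixed variables are inlined client-side.
function clientVisibleEnv(env) {
  return Object.fromEntries(Object.entries(env).filter(([key]) => key.startsWith('NEXT_PUBLIC_')))
}

const env = { API_KEY: 'secret', NEXT_PUBLIC_APP_NAME: 'demo' }
console.log(clientVisibleEnv(env)) // { NEXT_PUBLIC_APP_NAME: 'demo' }
```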
+
+### Using client-side rendering to query a Subgraph
+
+
+
+### Prerequisites
+
+- An API key from [Subgraph Studio](https://thegraph.com/studio)
+- Basic knowledge of Next.js and React.
+- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app).
+
+## Step-by-Step Cookbook
+
+### Step 1: Set Up Environment Variables
+
+1. In our Next.js project root, create a `.env.local` file.
+2. Add our API key: `API_KEY=`.
+
+### Step 2: Create a Server Component
+
+1. In our `components` directory, create a new file, `ServerComponent.js`.
+2. Use the provided example code to set up the server component.
+
+### Step 3: Implement Server-Side API Request
+
+In `ServerComponent.js`, add the following code:
+
+```javascript
+const API_KEY = process.env.API_KEY
+
+export default async function ServerComponent() {
+ const response = await fetch(
+ `https://gateway-arbitrum.network.thegraph.com/api/${API_KEY}/subgraphs/id/HUZDsRpEVP2AvzDCyzDHtdc64dyDxx8FQjzsmqSg4H3B`,
+ {
+ method: 'POST',
+ headers: {
+ 'Content-Type': 'application/json',
+ },
+ body: JSON.stringify({
+ query: /* GraphQL */ `
+ {
+ factories(first: 5) {
+ id
+ poolCount
+ txCount
+ totalVolumeUSD
+ }
+ }
+ `,
+ }),
+ },
+ )
+
+ const responseData = await response.json()
+ const data = responseData.data
+
+  return (
+    <div>
+      <h1>Server Component</h1>
+      {data ? (
+        <ul>
+          {data.factories.map((factory) => (
+            <li key={factory.id}>
+              <p>ID: {factory.id}</p>
+              <p>Pool Count: {factory.poolCount}</p>
+              <p>Transaction Count: {factory.txCount}</p>
+              <p>Total Volume USD: {factory.totalVolumeUSD}</p>
+            </li>
+          ))}
+        </ul>
+      ) : (
+        <p>Loading data...</p>
+      )}
+    </div>
+  )
+}
+```
+
+### Step 4: Use the Server Component
+
+1. In our page file (e.g., `pages/index.js`), import `ServerComponent`.
+2. Render the component:
+
+```javascript
+import ServerComponent from './components/ServerComponent'
+
+export default function Home() {
+  return (
+    <main>
+      <ServerComponent />
+    </main>
+  )
+}
+```
+
+### Step 5: Run and Test Our Dapp
+
+Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key.
+
+
+
+### Conclusion
+
+By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further.
diff --git a/website/src/pages/ar/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/ar/subgraphs/guides/subgraph-composition.mdx
new file mode 100644
index 000000000000..080de99b5ba1
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/guides/subgraph-composition.mdx
@@ -0,0 +1,132 @@
+---
+title: Aggregate Data Using Subgraph Composition
+sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs
+---
+
+Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it.
+
+Optimize your Subgraph by merging data from independent source Subgraphs into a single composable Subgraph to enhance data aggregation.
+
+## Introduction
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Speed up your Subgraph's syncing speed
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+
+- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts.
+- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers.
+
+While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
+
+## Prerequisites
+
+### Source Subgraphs
+
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs)
+- See notes here: https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0
+- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
+- Source Subgraphs cannot use grafting on top of existing entities
+- Aggregated entities can be used in composition, but entities composed from them cannot perform additional aggregations directly
+
+### Composed Subgraphs
+
+- You can only compose up to a **maximum of 5 source Subgraphs**
+- Composed Subgraphs can only use **datasources from the same chain**
+- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time
+- Aggregated entities can be used in composition, but entities composed on top of them cannot themselves use aggregations directly
+- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph)
+
+Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs.
+
+## Get Started
+
+The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph.
+
+### Specifics
+
+- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality.
+- Each source Subgraph is optimized with a specific entity.
+- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance.
+
+### Step 1. Deploy Block Time Source Subgraph
+
+This first source Subgraph calculates the block time for each block.
+
+- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined.
+- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly.
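+
+Block time here is simply the interval between consecutive block timestamps. A minimal sketch of that calculation (illustrative only — the actual mapping stores entities rather than returning arrays):
+
```javascript
// Sketch: block time as the difference between consecutive block
// timestamps (in seconds). Illustrative; not the actual mapping code.
function blockTimes(timestamps) {
  const intervals = []
  for (let i = 1; i < timestamps.length; i++) {
    intervals.push(timestamps[i] - timestamps[i - 1])
  }
  return intervals
}

console.log(blockTimes([100, 112, 126])) // [ 12, 14 ]
```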
+
+To deploy this Subgraph locally, run the following commands:
+
+```bash
+npm install
+npm run codegen
+npm run build
+npm run create-local
+npm run deploy-local
+```
+
+### Step 2. Deploy Block Cost Source Subgraph
+
+This second source Subgraph indexes the cost of each block.
+
+#### Key Functions
+
+- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields.
+- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the same commands as above.
+
+### Step 3. Define Block Size in Source Subgraph
+
+This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above.
+
+#### Key Functions
+
+- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size.
+- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly.
+
+### Step 4. Combine Into Block Stats Subgraph
+
+This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above.
+
+> Note:
+>
+> - Any change to a source Subgraph will likely generate a new deployment ID.
+> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes.
+> - All source Subgraphs should be deployed before the composed Subgraph is deployed.
+
+#### Key Functions
+
+- It provides a consolidated data model that encompasses all relevant block metrics.
+- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses.
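+
+Conceptually, the composed Subgraph joins the three source entities by block ID. A toy sketch of that merge (the field names are illustrative assumptions, not the example's actual schema):
+
```javascript
// Toy sketch: join per-block time, cost, and size records by block ID.
// Field names are illustrative assumptions, not the real schema.
function mergeBlockStats(times, costs, sizes) {
  const byId = new Map()
  for (const t of times) byId.set(t.id, { ...t })
  for (const c of costs) byId.set(c.id, { ...(byId.get(c.id) || { id: c.id }), cost: c.cost })
  for (const s of sizes) byId.set(s.id, { ...(byId.get(s.id) || { id: s.id }), size: s.size })
  return [...byId.values()]
}

const stats = mergeBlockStats([{ id: '1', blockTime: 12 }], [{ id: '1', cost: 5 }], [{ id: '1', size: 200 }])
console.log(stats) // [ { id: '1', blockTime: 12, cost: 5, size: 200 } ]
```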
+
+## Key Takeaways
+
+- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs.
+- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph.
+- This feature unlocks scalability, simplifying both development and maintenance efficiency.
+
+## Additional Resources
+
+- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph).
+- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/).
+- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
diff --git a/website/src/pages/ar/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/ar/subgraphs/guides/subgraph-debug-forking.mdx
new file mode 100644
index 000000000000..364fb8ce4d9c
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/guides/subgraph-debug-forking.mdx
@@ -0,0 +1,101 @@
+---
+title: Quick and Easy Subgraph Debugging Using Forks
+---
+
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync your Subgraph with the target blockchain. The discrepancy between quick changes made for debugging and the long wait times needed for indexing is extremely counterproductive, and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed up Subgraph debugging!
+
+## Ok, what is it?
+
+**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one).
+
+In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_.
+
+## What?! How?
+
+When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
+
+In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state.
+
+## Please, show me some code!
+
+To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.
+
+Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever:
+
+```tsx
+export function handleNewGravatar(event: NewGravatar): void {
+ let gravatar = new Gravatar(event.params.id.toHex().toString())
+ gravatar.owner = event.params.owner
+ gravatar.displayName = event.params.displayName
+ gravatar.imageUrl = event.params.imageUrl
+ gravatar.save()
+}
+
+export function handleUpdatedGravatar(event: UpdatedGravatar): void {
+ let gravatar = Gravatar.load(event.params.id.toI32().toString())
+ if (gravatar == null) {
+ log.critical('Gravatar not found!', [])
+ return
+ }
+ gravatar.owner = event.params.owner
+ gravatar.displayName = event.params.displayName
+ gravatar.imageUrl = event.params.imageUrl
+ gravatar.save()
+}
+```
+
+Oops, how unfortunate, when I deploy my perfect looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error.
+
+The usual way to attempt a fix is:
+
+1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't).
+2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
+3. Wait for it to sync-up.
+4. If it breaks again, go back to 1!
+
+It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._
+
+Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks:
+
+0. Spin-up a local Graph Node with the **_appropriate fork-base_** set.
+1. Make a change in the mappings source, which you believe will solve the issue.
+2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**.
+3. If it breaks again, go back to 1!
+
+Now, you may have two questions:
+
+1. What's a fork-base???
+2. Forking what?!
+
+And I answer:
+
+1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended, the resulting URL (`fork-base` + `subgraph-id`) is a valid GraphQL endpoint for the Subgraph's store.
+2. Forking is easy, no need to sweat:
+
+```bash
+$ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020
+```
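A `fork-base` is just a prefix, so composing the endpoint for a given deployment is simple string concatenation. A minimal TypeScript sketch (the base and deployment ID below are the ones used later in this guide):

```typescript
// Compose a Subgraph's GraphQL endpoint from a fork-base URL and a deployment ID.
function forkEndpoint(forkBase: string, deploymentId: string): string {
  // fork-base ends with a trailing slash, so the ID is appended directly.
  return forkBase + deploymentId;
}

const base = "https://api.thegraph.com/subgraphs/id/";
const id = "QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW";
console.log(forkEndpoint(base, id));
// https://api.thegraph.com/subgraphs/id/QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW
```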
+
+Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
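As a sketch, the relevant manifest fragment could look roughly like this — the field values assume the example Subgraph's `subgraph.yaml`, and only `startBlock` needs to change:

```yaml
dataSources:
  - kind: ethereum/contract
    name: Gravity
    network: mainnet
    source:
      address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC'
      abi: Gravity
      # Start at the problematic block so the fork is exercised immediately.
      startBlock: 6190343
```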
+
+So, here is what I do:
+
+1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
+
+```bash
+$ cargo run -p graph-node --release -- \
+ --postgres-url postgresql://USERNAME[:PASSWORD]@localhost:5432/graph-node \
+ --ethereum-rpc NETWORK_NAME:[CAPABILITIES]:URL \
+ --ipfs 127.0.0.1:5001 \
+ --fork-base https://api.thegraph.com/subgraphs/id/
+```
+
+2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex.
+3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`:
+
+```bash
+$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020
+```
+
+4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working.
+5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho)
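The mismatch fixed in step 2 can be reproduced outside AssemblyScript with a plain TypeScript sketch — an entity saved under a hex-encoded id is invisible to a lookup that uses the decimal encoding:

```typescript
// Plain TypeScript illustration (not AssemblyScript) of the id-encoding bug:
// an entity saved under a hex id cannot be loaded with a decimal id.
const gravatarStore = new Map<string, { displayName: string }>();

const hexId = (id: number): string => "0x" + id.toString(16);
const i32Id = (id: number): string => id.toString(10);

// handleNewGravatar saved the entity under the hex representation...
gravatarStore.set(hexId(42), { displayName: "alice" });

// ...so a load using the decimal representation comes up empty —
// exactly what triggered "Gravatar not found!".
console.log(gravatarStore.get(i32Id(42))); // undefined
console.log(gravatarStore.get(hexId(42))); // { displayName: "alice" }
```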
diff --git a/website/src/pages/ar/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/ar/subgraphs/guides/subgraph-uncrashable.mdx
new file mode 100644
index 000000000000..a08e2a7ad8c9
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/guides/subgraph-uncrashable.mdx
@@ -0,0 +1,29 @@
+---
+title: Safe Subgraph Code Generator
+---
+
+[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the GraphQL schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent.
+
+## Why integrate with Subgraph Uncrashable?
+
+- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity.
+
+- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic.
+
+- **User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
+
+**Key Features**
+
+- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions to the user's specification.
+
+- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity and it is also impossible to forget to save or set a variable that is required by the function.
+
+- Warnings are recorded as logs indicating where there is a breach of Subgraph logic, helping to patch the issue and ensure data accuracy.
+
+Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command.
+
+```sh
+graph codegen -u [options] [<subgraph-manifest default: ./subgraph.yaml>]
+```
+
+Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs.
diff --git a/website/src/pages/ar/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/ar/subgraphs/guides/transfer-to-the-graph.mdx
new file mode 100644
index 000000000000..4be3dcedffe8
--- /dev/null
+++ b/website/src/pages/ar/subgraphs/guides/transfer-to-the-graph.mdx
@@ -0,0 +1,104 @@
+---
+title: Transfer to The Graph
+---
+
+Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/).
+
+## Benefits of Switching to The Graph
+
+- Use the same Subgraph that your apps already use with zero-downtime migration.
+- Increase reliability from a global network supported by 100+ Indexers.
+- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team.
+
+## Upgrade Your Subgraph to The Graph in 3 Easy Steps
+
+1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment)
+2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio)
+3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network)
+
+## 1. Set Up Your Studio Environment
+
+### Create a Subgraph in Subgraph Studio
+
+- Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet.
+- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name".
+
+> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly.
+
+### Install the Graph CLI
+
+You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version.
+
+On your local machine, run the following command:
+
+Using [npm](https://www.npmjs.com/):
+
+```sh
+npm install -g @graphprotocol/graph-cli@latest
+```
+
+Use the following command to create a Subgraph in Studio using the CLI:
+
+```sh
+graph init --product subgraph-studio
+```
+
+### Authenticate Your Subgraph
+
+In The Graph CLI, use the auth command seen in Subgraph Studio:
+
+```sh
+graph auth
+```
+
+## 2. Deploy Your Subgraph to Studio
+
+If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph.
+
+In The Graph CLI, run the following command:
+
+```sh
+graph deploy --ipfs-hash
+```
+
+> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1).
+
+## 3. Publish Your Subgraph to The Graph Network
+
+
+
+### Query Your Subgraph
+
+> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.
+
+You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.
+
+#### Example
+
+[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:
+
+
+
+The query URL for this Subgraph is:
+
+```sh
+https://gateway-arbitrum.network.thegraph.com/api/<your-own-api-key>/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK
+```
+
+Now, you simply need to fill in **your own API Key** to start sending GraphQL queries to this endpoint.
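As a sketch (the API key is a placeholder, and `punks` is an assumed entity name — check the Subgraph's schema in Explorer for the real fields), a query could be sent with a plain `fetch`:

```typescript
// Sketch: build and send a GraphQL query to a Subgraph's query URL.
const endpoint =
  "https://gateway-arbitrum.network.thegraph.com/api/<your-own-api-key>/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK";

// A GraphQL request is a plain JSON POST body with a `query` string.
const payload = JSON.stringify({ query: "{ punks(first: 5) { id } }" });

async function querySubgraph(): Promise<unknown> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: payload,
  });
  return res.json();
}
// querySubgraph().then(console.log); // run once a real API key is filled in
```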
+
+### Getting your own API Key
+
+You can create API Keys in Subgraph Studio under the “API Keys” menu at the top of the page:
+
+
+
+### Monitor Subgraph Status
+
+Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
+
+### Additional Resources
+
+- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
+- To explore all the ways you can optimize and customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/ar/subgraphs/querying/best-practices.mdx b/website/src/pages/ar/subgraphs/querying/best-practices.mdx
index 23dcd2cb8920..f469ff02de9c 100644
--- a/website/src/pages/ar/subgraphs/querying/best-practices.mdx
+++ b/website/src/pages/ar/subgraphs/querying/best-practices.mdx
@@ -4,7 +4,7 @@ title: أفضل الممارسات للاستعلام
The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language.
-Learn the essential GraphQL language rules and best practices to optimize your subgraph.
+Learn the essential GraphQL language rules and best practices to optimize your Subgraph.
---
@@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi
However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features:
-- التعامل مع ال subgraph عبر السلاسل: الاستعلام من عدة subgraphs عبر استعلام واحد
+- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query
- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- Fully typed results
@@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set `
### Use a single query to request multiple records
-By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
+By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
Example of inefficient querying:
diff --git a/website/src/pages/ar/subgraphs/querying/from-an-application.mdx b/website/src/pages/ar/subgraphs/querying/from-an-application.mdx
index 767a2caa9021..08c71fa4ad1f 100644
--- a/website/src/pages/ar/subgraphs/querying/from-an-application.mdx
+++ b/website/src/pages/ar/subgraphs/querying/from-an-application.mdx
@@ -1,5 +1,6 @@
---
title: الاستعلام من التطبيق
+sidebarTitle: Querying from an App
---
Learn how to query The Graph from your application.
@@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d
### Subgraph Studio Endpoint
-After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:
+After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:
```
https://api.studio.thegraph.com/query///
@@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query///
### The Graph Network Endpoint
-After publishing your subgraph to the network, you will receive an endpoint that looks like this: :
+After publishing your Subgraph to the network, you will receive an endpoint that looks like this:
```
https://gateway.thegraph.com/api//subgraphs/id/
```
-> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data.
+> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data.
## Using Popular GraphQL Clients
@@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/
The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as:
-- التعامل مع ال subgraph عبر السلاسل: الاستعلام من عدة subgraphs عبر استعلام واحد
+- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query
- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- Fully typed results
@@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq
### Fetch Data with Graph Client
-Let's look at how to fetch data from a subgraph with `graph-client`:
+Let's look at how to fetch data from a Subgraph with `graph-client`:
#### Step 1
@@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on
### Fetch Data with Apollo Client
-Let's look at how to fetch data from a subgraph with Apollo client:
+Let's look at how to fetch data from a Subgraph with Apollo client:
#### Step 1
@@ -257,7 +258,7 @@ client
### Fetch data with URQL
-Let's look at how to fetch data from a subgraph with URQL:
+Let's look at how to fetch data from a Subgraph with URQL:
#### Step 1
diff --git a/website/src/pages/ar/subgraphs/querying/graph-client/README.md b/website/src/pages/ar/subgraphs/querying/graph-client/README.md
index 416cadc13c6f..d4850e723c6e 100644
--- a/website/src/pages/ar/subgraphs/querying/graph-client/README.md
+++ b/website/src/pages/ar/subgraphs/querying/graph-client/README.md
@@ -16,19 +16,19 @@ This library is intended to simplify the network aspect of data consumption for
| Status | Feature | Notes |
| :----: | ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- |
-| ✅ | Multiple indexers | based on fetch strategies |
-| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue |
-| ✅ | Build time validations & optimizations | |
-| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) |
-| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source |
-| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client |
-| ✅ | Local (client-side) Mutations | |
-| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) |
-| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit |
-| ✅ | Integration with `@apollo/client` | |
-| ✅ | Integration with `urql` | |
-| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` |
-| ✅ | [`@live` queries](./live.md) | Based on polling |
+| ✅ | Multiple indexers | based on fetch strategies |
+| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue |
+| ✅ | Build time validations & optimizations | |
+| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) |
+| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source |
+| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client |
+| ✅ | Local (client-side) Mutations | |
+| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) |
+| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit |
+| ✅ | Integration with `@apollo/client` | |
+| ✅ | Integration with `urql` | |
+| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` |
+| ✅ | [`@live` queries](./live.md) | Based on polling |
> You can find an [extended architecture design here](./architecture.md)
@@ -308,8 +308,8 @@ sources:
`highestValue`
-
- This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated.
+
+This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated.
This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources.
diff --git a/website/src/pages/ar/subgraphs/querying/graphql-api.mdx b/website/src/pages/ar/subgraphs/querying/graphql-api.mdx
index d73381f88a7d..801e95fa66de 100644
--- a/website/src/pages/ar/subgraphs/querying/graphql-api.mdx
+++ b/website/src/pages/ar/subgraphs/querying/graphql-api.mdx
@@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph.
## What is GraphQL?
-[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs.
-To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/).
+To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/).
## Queries with GraphQL
-In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type.
+In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type.
> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph.
@@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison:
You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`.
-This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block).
+This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block).
```graphql
{
@@ -329,18 +329,18 @@ This query will return `Challenge` entities, and their associated `Application`
### Fulltext Search Queries
-Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph.
+Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph.
Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field.
Fulltext search operators:
-| رمز | عامل التشغيل | الوصف |
-| --- | --- | --- |
-| `&` | `And` | لدمج عبارات بحث متعددة في فلتر للكيانات التي تتضمن جميع العبارات المتوفرة |
-| | | `Or` | الاستعلامات التي تحتوي على عبارات بحث متعددة مفصولة بواسطة عامل التشغيل or ستعيد جميع الكيانات المتطابقة من أي عبارة متوفرة |
-| `<->` | `Follow by` | يحدد المسافة بين كلمتين. |
-| `:*` | `Prefix` | يستخدم عبارة البحث prefix للعثور على الكلمات التي تتطابق بادئتها (مطلوب حرفان.) |
+| Symbol | Operator | Description |
+| ------ | ----------- | --------------------------------------------------------------------------------------------------------------------------------- |
+| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms |
+| &#x7c; | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms |
+| `<->` | `Follow by` | Specify the distance between two words. |
+| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required.) |
#### Examples
@@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021
The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
+GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.
@@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en
### Subgraph Metadata
-All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows:
+All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows:
```graphQL
{
@@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s
}
```
-If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the subgraph's start block, and less than or equal to the most recently indexed block.
+If a block is provided, the metadata is as of that block; if not, the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block.
`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file.
@@ -427,6 +427,6 @@ If a block is provided, the metadata is as of that block, if not the latest inde
- hash: the hash of the block
- number: the block number
-- timestamp: the timestamp of the block, if available (this is currently only available for subgraphs indexing EVM networks)
+- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks)
-`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block
+`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block
diff --git a/website/src/pages/ar/subgraphs/querying/introduction.mdx b/website/src/pages/ar/subgraphs/querying/introduction.mdx
index 281957e11e14..bdd0bde88865 100644
--- a/website/src/pages/ar/subgraphs/querying/introduction.mdx
+++ b/website/src/pages/ar/subgraphs/querying/introduction.mdx
@@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex
## Overview
-When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph.
+When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph.
## Specifics
-Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner.
+Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner.

@@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an
Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/).
-> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities.
+> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities.
>
> Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead.
diff --git a/website/src/pages/ar/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/ar/subgraphs/querying/managing-api-keys.mdx
index 33e9d7b78fc2..7b91a147ef47 100644
--- a/website/src/pages/ar/subgraphs/querying/managing-api-keys.mdx
+++ b/website/src/pages/ar/subgraphs/querying/managing-api-keys.mdx
@@ -4,11 +4,11 @@ title: Managing API keys
## Overview
-API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application.
+API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application.
### Create and Manage API Keys
-Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs.
+Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs.
The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
@@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page:
- Amount of GRT spent
2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can:
- View and manage the domain names authorized to use your API key
- - تعيين الـ subgraphs التي يمكن الاستعلام عنها باستخدام مفتاح API الخاص بك
+ - Assign Subgraphs that can be queried with your API key
diff --git a/website/src/pages/ar/subgraphs/querying/python.mdx b/website/src/pages/ar/subgraphs/querying/python.mdx
index 0937e4f7862d..ed0d078a4175 100644
--- a/website/src/pages/ar/subgraphs/querying/python.mdx
+++ b/website/src/pages/ar/subgraphs/querying/python.mdx
@@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds
sidebarTitle: Python (Subgrounds)
---
-Subgrounds is an intuitive Python library for querying subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis!
+Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis!
Subgrounds offers a simple Pythonic API for building GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations.
@@ -17,14 +17,14 @@ pip install --upgrade subgrounds
python -m pip install --upgrade subgrounds
```
-Once installed, you can test out subgrounds with the following query. The following example grabs a subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame).
+Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame).
```python
from subgrounds import Subgrounds
sg = Subgrounds()
-# Load the subgraph
+# Load the Subgraph
aave_v2 = sg.load_subgraph(
"https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum")
diff --git a/website/src/pages/ar/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/ar/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
index 103e470e14da..17258dd13ea1 100644
--- a/website/src/pages/ar/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
+++ b/website/src/pages/ar/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
@@ -2,17 +2,17 @@
title: Subgraph ID vs Deployment ID
---
-A subgraph is identified by a Subgraph ID, and each version of the subgraph is identified by a Deployment ID.
+A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID.
-When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph.
+When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph.
Here are some key differences between the two IDs: 
## Deployment ID
-The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
+The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
-When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published.
+When queries are made using a Subgraph's Deployment ID, we are specifying a specific version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup, as there is full control over the Subgraph version being queried. However, it means the query code must be updated manually every time a new version of the Subgraph is published.
Example endpoint that uses Deployment ID:
@@ -20,8 +20,8 @@ Example endpoint that uses Deployment ID:
## Subgraph ID
-The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats.
+The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats.
-Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
+Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the Subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`
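Whichever ID you pick, only one path segment of the gateway URL changes. The following is a build-only sketch in Python (standard library only); the `deployments/id/` path for Deployment IDs is an assumption mirroring the Subgraph ID endpoint shape shown on this page, and the API key is a placeholder:

```python
import json
from urllib import request

GATEWAY = "https://gateway-arbitrum.network.thegraph.com/api/{key}"

def graphql_request(api_key: str, kind: str, ident: str, query: str) -> request.Request:
    """Build (but do not send) a GraphQL POST request to the gateway.

    kind: "subgraphs" for a Subgraph ID, "deployments" for a Deployment ID
    (the latter path is an assumption, mirroring the Subgraph ID endpoint).
    """
    url = f"{GATEWAY.format(key=api_key)}/{kind}/id/{ident}"
    body = json.dumps({"query": query}).encode()
    return request.Request(url, data=body, headers={"Content-Type": "application/json"})

# Subgraph ID endpoint from the example above, with a placeholder key.
req = graphql_request(
    "my-api-key",
    "subgraphs",
    "FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW",
    "{ _meta { block { number } } }",
)
print(req.full_url)
```

Switching to a pinned version is then a one-argument change: pass `"deployments"` and the Deployment ID instead.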
diff --git a/website/src/pages/ar/subgraphs/quick-start.mdx b/website/src/pages/ar/subgraphs/quick-start.mdx
index 42f4acf08df9..9b7bf860e87d 100644
--- a/website/src/pages/ar/subgraphs/quick-start.mdx
+++ b/website/src/pages/ar/subgraphs/quick-start.mdx
@@ -2,7 +2,7 @@
title: بداية سريعة
---
-Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
+Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
## Prerequisites
@@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/
## How to Build a Subgraph
-### 1. Create a subgraph in Subgraph Studio
+### 1. Create a Subgraph in Subgraph Studio
Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet.
-Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys.
+Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys.
-Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name".
+Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name".
### 2. Install the Graph CLI
@@ -37,13 +37,13 @@ Using [yarn](https://yarnpkg.com/):
yarn global add @graphprotocol/graph-cli
```
-### 3. Initialize your subgraph
+### 3. Initialize your Subgraph
-> يمكنك العثور على الأوامر المتعلقة بالغراف الفرعي الخاص بك على صفحة الغراف الفرعي في (سبغراف استوديو) (https://thegraph.com/studio).
+> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/).
-The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events.
+The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events.
-The following command initializes your subgraph from an existing contract:
+The following command initializes your Subgraph from an existing contract:
```sh
graph init
@@ -51,42 +51,42 @@ graph init
If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI.
-When you initialize your subgraph, the CLI will ask you for the following information:
+When you initialize your Subgraph, the CLI will ask you for the following information:
-- **Protocol**: Choose the protocol your subgraph will be indexing data from.
-- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph.
-- **Directory**: Choose a directory to create your subgraph in.
-- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from.
+- **Protocol**: Choose the protocol your Subgraph will be indexing data from.
+- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph.
+- **Directory**: Choose a directory to create your Subgraph in.
+- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from.
- **Contract address**: Locate the smart contract address you’d like to query data from.
- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file.
-- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed.
+- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed.
- **Contract Name**: Input the name of your contract.
-- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event.
+- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event.
- **Add another contract** (optional): You can add another contract.
-يرجى مراجعة الصورة المرفقة كمثال عن ما يمكن توقعه عند تهيئة غرافك الفرعي:
+See the following screenshot for an example of what to expect when initializing your Subgraph:

-### 4. Edit your subgraph
+### 4. Edit your Subgraph
-The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph.
+The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph.
-When making changes to the subgraph, you will mainly work with three files:
+When making changes to the Subgraph, you will mainly work with three files:
-- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index.
-- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph.
+- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index.
+- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph.
- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema.
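To make the relationship between these files concrete, a minimal `schema.graphql` entity for a hypothetical token-transfer event might look like this (the `Transfer` entity and its fields are illustrative, not part of the generated scaffold):

```graphql
type Transfer @entity(immutable: true) {
  id: Bytes! # transaction hash concatenated with the log index
  from: Bytes!
  to: Bytes!
  value: BigInt!
  blockTimestamp: BigInt!
}
```

The manifest would point a data source's event handler at this entity, and the mapping in `mapping.ts` would populate it from the decoded event.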
-For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/).
+For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/).
-### 5. Deploy your subgraph
+### 5. Deploy your Subgraph
> Remember, deploying is not the same as publishing.
-When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
+When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
-عند كتابة غرافك الفرعي، قم بتنفيذ الأوامر التالية:
+Once your Subgraph is written, run the following commands:
````
```sh
@@ -94,7 +94,7 @@ graph codegen && graph build
```
````
-Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio.
+Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio.

@@ -109,37 +109,37 @@ graph deploy
The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`.
-### 6. Review your subgraph
+### 6. Review your Subgraph
-If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:
+If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:
- Run a sample query.
-- Analyze your subgraph in the dashboard to check information.
-- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this:
+- Analyze your Subgraph in the dashboard to check information.
+- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this:

-### 7. Publish your subgraph to The Graph Network
+### 7. Publish your Subgraph to The Graph Network
-When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following:
+When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following:
-- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
-- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
-- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it.
+- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
+- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
+- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it.
-> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph.
+> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph.
#### Publishing with Subgraph Studio
-To publish your subgraph, click the Publish button in the dashboard.
+To publish your Subgraph, click the Publish button in the dashboard.
-
+
-Select the network to which you would like to publish your subgraph.
+Select the network to which you would like to publish your Subgraph.
#### Publishing from the CLI
-As of version 0.73.0, you can also publish your subgraph with the Graph CLI.
+As of version 0.73.0, you can also publish your Subgraph with the Graph CLI.
Open the `graph-cli`.
@@ -157,32 +157,32 @@ graph publish
```
````
-3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice.
+3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice.

To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
-#### Adding signal to your subgraph
+#### Adding signal to your Subgraph
-1. To attract Indexers to query your subgraph, you should add GRT curation signal to it.
+1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it.
- - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph.
+ - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph.
2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount.
- - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks.
+ - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks.
To learn more about curation, read [Curating](/resources/roles/curating/).
-To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option:
+To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option:

-### 8. Query your subgraph
+### 8. Query your Subgraph
-You now have access to 100,000 free queries per month with your subgraph on The Graph Network!
+You now have access to 100,000 free queries per month with your Subgraph on The Graph Network!
-You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button.
+You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button.
-For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/).
+For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/).
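For example, if your Subgraph indexed a hypothetical `Transfer` entity, a first GraphQL query against its Query URL could look like the following (entity and field names depend entirely on your own schema):

```graphql
{
  transfers(first: 5, orderBy: blockTimestamp, orderDirection: desc) {
    id
    from
    to
    value
  }
}
```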
diff --git a/website/src/pages/ar/substreams/developing/dev-container.mdx b/website/src/pages/ar/substreams/developing/dev-container.mdx
index bd4acf16eec7..339ddb159c87 100644
--- a/website/src/pages/ar/substreams/developing/dev-container.mdx
+++ b/website/src/pages/ar/substreams/developing/dev-container.mdx
@@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container.
It's a tool to help you build your first project. You can either run it remotely through GitHub Codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file).
-Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling.
+Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling.
## Prerequisites
@@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea
You can configure your project to query data either through a Subgraph or directly from an SQL database:
-- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph).
+- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph).
- **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink).
## Deployment Options
diff --git a/website/src/pages/ar/substreams/developing/sinks.mdx b/website/src/pages/ar/substreams/developing/sinks.mdx
index 8a3a2eda4ff0..40ca8a67080f 100644
--- a/website/src/pages/ar/substreams/developing/sinks.mdx
+++ b/website/src/pages/ar/substreams/developing/sinks.mdx
@@ -1,5 +1,5 @@
---
-title: Official Sinks
+title: Sink your Substreams
---
Choose a sink that meets your project's needs.
@@ -8,7 +8,7 @@ Choose a sink that meets your project's needs.
Once you find a package that fits your needs, you can choose how you want to consume the data.
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph.
+Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph.
## Sinks
@@ -26,26 +26,26 @@ Sinks are integrations that allow you to send the extracted data to different de
### Official
-| Name | Support | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
-| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |
-| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) |
-| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) |
-| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
-| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) |
-| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
-| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |
+| Name | Support | Maintainer | Source Code |
+| ---------- | ------- | ------------- | ----------------------------------------------------------------------------------------- |
+| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
+| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |
+| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) |
+| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) |
+| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
+| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) |
+| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
+| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |
### Community
-| Name | Support | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
-| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
-| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| Name | Support | Maintainer | Source Code |
+| ---------- | ------- | ---------- | ----------------------------------------------------------------------------------------- |
+| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
+| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
+| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
+| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
- O = Official Support (by one of the main Substreams providers)
- C = Community Support
diff --git a/website/src/pages/ar/substreams/developing/solana/account-changes.mdx b/website/src/pages/ar/substreams/developing/solana/account-changes.mdx
index 3e13301b042c..704443dee771 100644
--- a/website/src/pages/ar/substreams/developing/solana/account-changes.mdx
+++ b/website/src/pages/ar/substreams/developing/solana/account-changes.mdx
@@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu
> NOTE: History for Solana Account Changes is available as of 2025, block 310629601.
-For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes).
+For each Substreams Solana account block, only the latest update per account is recorded; see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance were omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (e.g., lamport changes).
> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and run `substreams run solana-common blocks_without_votes -s -1 -o clock`.
diff --git a/website/src/pages/ar/substreams/developing/solana/transactions.mdx b/website/src/pages/ar/substreams/developing/solana/transactions.mdx
index b1b97cdcbfe5..ebdeeb98a931 100644
--- a/website/src/pages/ar/substreams/developing/solana/transactions.mdx
+++ b/website/src/pages/ar/substreams/developing/solana/transactions.mdx
@@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi
## Step 3: Load the Data
-To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink.
+To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink.
### Subgraph
1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions.
-2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`.
+2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`.
3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`.
### SQL
diff --git a/website/src/pages/ar/substreams/introduction.mdx b/website/src/pages/ar/substreams/introduction.mdx
index 774c2dfb90c2..ffb3f46baa62 100644
--- a/website/src/pages/ar/substreams/introduction.mdx
+++ b/website/src/pages/ar/substreams/introduction.mdx
@@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh
## Substreams Benefits
-- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing.
+- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing.
- **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara.
- **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections.
- **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database.
diff --git a/website/src/pages/ar/substreams/publishing.mdx b/website/src/pages/ar/substreams/publishing.mdx
index 0d3b7933820e..8ee05b0eda53 100644
--- a/website/src/pages/ar/substreams/publishing.mdx
+++ b/website/src/pages/ar/substreams/publishing.mdx
@@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s
### What is a package?
-A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs.
+A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs.
## Publish a Package
@@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data

-That's it! You have succesfully published a package in the Substreams registry.
+That's it! You have successfully published a package in the Substreams registry.

diff --git a/website/src/pages/ar/supported-networks.mdx b/website/src/pages/ar/supported-networks.mdx
index 559f4bc25d5e..ac7050638264 100644
--- a/website/src/pages/ar/supported-networks.mdx
+++ b/website/src/pages/ar/supported-networks.mdx
@@ -1,22 +1,28 @@
---
title: الشبكات المدعومة
hideTableOfContents: true
+hideContentHeader: true
---
-import { getStaticPropsForSupportedNetworks } from '@/buildGetStaticProps'
-import { SupportedNetworksTable } from '@/supportedNetworks'
+import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks'
+import { Heading } from '@/components'
+import { useI18n } from '@/i18n'
-export const getStaticProps = getStaticPropsForSupportedNetworks(__filename)
+export const getStaticProps = getSupportedNetworksStaticProps
+
+
+ {useI18n().t('index.supportedNetworks.title')}
+
- Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints.
- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier.
-- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks.
+- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks.
- For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
## Running Graph Node locally
If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration.
-Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support.
+Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support.
diff --git a/website/src/pages/ar/token-api/_meta-titles.json b/website/src/pages/ar/token-api/_meta-titles.json
new file mode 100644
index 000000000000..7ed31e0af95d
--- /dev/null
+++ b/website/src/pages/ar/token-api/_meta-titles.json
@@ -0,0 +1,6 @@
+{
+ "mcp": "MCP",
+ "evm": "EVM Endpoints",
+ "monitoring": "Monitoring Endpoints",
+ "faq": "FAQ"
+}
diff --git a/website/src/pages/ar/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/ar/token-api/evm/get-balances-evm-by-address.mdx
new file mode 100644
index 000000000000..3386fd078059
--- /dev/null
+++ b/website/src/pages/ar/token-api/evm/get-balances-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: Token Balances by Wallet Address
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getBalancesEvmByAddress
+---
+
+The EVM Balances endpoint provides a snapshot of an account’s current token holdings, returning the balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain.
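As a build-only sketch of calling such an endpoint from Python: the base URL and the `/balances/evm/{address}` path below are inferred from the `getBalancesEvmByAddress` operation id and should be checked against the API reference, and the bearer token is a placeholder.

```python
from urllib import request

BASE_URL = "https://token-api.thegraph.com"  # assumed host

def balances_request(address: str, api_token: str) -> request.Request:
    # Path shape inferred from the `getBalancesEvmByAddress` operation id
    # (an assumption, not taken from this page).
    url = f"{BASE_URL}/balances/evm/{address}"
    return request.Request(url, headers={"Authorization": f"Bearer {api_token}"})

req = balances_request("0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045", "<token>")
print(req.full_url)
```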
diff --git a/website/src/pages/ar/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/ar/token-api/evm/get-holders-evm-by-contract.mdx
new file mode 100644
index 000000000000..0bb79e41ed54
--- /dev/null
+++ b/website/src/pages/ar/token-api/evm/get-holders-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token Holders by Contract Address
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getHoldersEvmByContract
+---
+
+The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract.
diff --git a/website/src/pages/ar/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/ar/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
new file mode 100644
index 000000000000..d1558ddd6e78
--- /dev/null
+++ b/website/src/pages/ar/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token OHLCV Prices by Contract Address
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getOhlcPricesEvmByContract
+---
+
+The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/ar/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/ar/token-api/evm/get-tokens-evm-by-contract.mdx
new file mode 100644
index 000000000000..b6fab8011fc2
--- /dev/null
+++ b/website/src/pages/ar/token-api/evm/get-tokens-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token Holders and Supply by Contract Address
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getTokensEvmByContract
+---
+
+The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more.
diff --git a/website/src/pages/ar/token-api/evm/get-transfers-evm-by-address.mdx b/website/src/pages/ar/token-api/evm/get-transfers-evm-by-address.mdx
new file mode 100644
index 000000000000..604c185588ea
--- /dev/null
+++ b/website/src/pages/ar/token-api/evm/get-transfers-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: Token Transfers by Wallet Address
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getTransfersEvmByAddress
+---
+
+The EVM Transfers endpoint provides access to historical token transfer events for a specified address. This endpoint is ideal for tracking transaction history and analyzing token movements over time.
diff --git a/website/src/pages/ar/token-api/faq.mdx b/website/src/pages/ar/token-api/faq.mdx
new file mode 100644
index 000000000000..8c1032894ddb
--- /dev/null
+++ b/website/src/pages/ar/token-api/faq.mdx
@@ -0,0 +1,109 @@
+---
+title: Token API FAQ
+---
+
+Get fast answers to easily integrate and scale with The Graph's high-performance Token API.
+
+## عام
+
+### What blockchains does the Token API support?
+
+Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One.
+
+### Why isn't my API key from The Graph Market working?
+
+Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key.
+
+### How current is the data provided by the API relative to the blockchain?
+
+The API provides data up to the latest finalized block.
+
+### How do I authenticate requests to the Token API?
+
+Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
+
+### Does the Token API provide a client SDK?
+
+While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol).
+
+### Are there plans to support additional blockchains in the future?
+
+Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol).
+
+### Are there plans to offer data closer to the chain head?
+
+Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol).
+
+### Are there plans to support additional use cases such as NFTs?
+
+The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
+
+## MCP / LLM / AI Topics
+
+### Is there a time limit for LLM queries?
+
+Yes. The maximum time limit for LLM queries is 10 seconds.
+
+### Is there a known list of LLMs that work with the API?
+
+Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server.
+
+Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter).
+
+### Where can I find the MCP client?
+
+You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client).
+
+## Advanced Topics
+
+### I'm getting 403/401 errors. What's wrong?
+
+Check that you included the `Authorization: Bearer <jwt-token>` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market.
+
+### Are there rate limits or usage costs?
+
+During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+
+### What networks are supported, and how do I specify them?
+
+You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
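As a concrete sketch, a request can target a specific chain by appending the `network_id` query parameter. The base URL and parameter values come from the answer above; the wallet address and helper name are illustrative:

```js
// Build a Token API request URL for a specific chain.
// Without `network_id`, the API defaults to Ethereum mainnet.
const BASE = 'https://token-api.thegraph.com'

function balancesUrl(address, networkId) {
  const url = new URL(`${BASE}/balances/evm/${address}`)
  if (networkId) url.searchParams.set('network_id', networkId)
  return url.toString()
}

console.log(balancesUrl('0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208', 'arbitrum-one'))
// https://token-api.thegraph.com/balances/evm/0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208?network_id=arbitrum-one
```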
+
+### Why do I only see 10 results? How can I get more data?
+
+Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
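A minimal sketch of building the pagination query string (the helper name is illustrative; the limits are those stated above):

```js
// Build pagination query parameters: `limit` up to 500, `page` 1-indexed.
function pageParams(page, limit = 50) {
  if (limit < 1 || limit > 500) throw new Error('limit must be between 1 and 500')
  if (page < 1) throw new Error('page is 1-indexed')
  return `limit=${limit}&page=${page}`
}

// Items 51-100 (limit=50, page=2):
console.log(pageParams(2)) // limit=50&page=2
```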
+
+### How do I fetch older transfer history?
+
+The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call.
+
+### What does an empty `"data": []` array mean?
+
+An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error.
+
+### Why is the JSON response wrapped in a `"data"` array?
+
+All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`).
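For example, parsing might look like this. The payload below is illustrative rather than real API output, and the field names are assumptions, not the full schema:

```js
// A shape-compatible sample response showing the top-level `data` array.
const response = {
  data: [
    { contract: '0xaaaa', amount: '1000000', decimals: 6 },
    { contract: '0xbbbb', amount: '5', decimals: 0 },
  ],
}

// Always index into `data`, even when you expect a single item.
const results = response.data ?? []
console.log(results.length) // 2
```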
+
+### Why are token amounts returned as strings?
+
+Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values.
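A small sketch of a precision-safe conversion using JavaScript's `BigInt` (the helper name and sample amounts are illustrative):

```js
// Convert a string token amount plus its `decimals` into a human-readable
// decimal string without losing precision.
function formatAmount(raw, decimals) {
  const base = 10n ** BigInt(decimals)
  const value = BigInt(raw)
  const whole = value / base
  // Pad the remainder to `decimals` digits, then trim trailing zeros.
  const frac = (value % base).toString().padStart(decimals, '0').replace(/0+$/, '')
  return frac ? `${whole}.${frac}` : whole.toString()
}

console.log(formatAmount('1234500000000000000000', 18)) // 1234.5
```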
+
+### What format should addresses be in?
+
+The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address.
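A defensive normalization helper might look like this (a sketch only; as noted above, the API itself accepts either casing and either prefix form):

```js
// Normalize a user-supplied address to the 0x-prefixed lowercase form,
// rejecting anything that is not exactly 40 hex digits.
function normalizeAddress(input) {
  const hex = input.startsWith('0x') ? input.slice(2) : input
  if (!/^[0-9a-fA-F]{40}$/.test(hex)) throw new Error('not a 40-digit hex address')
  return '0x' + hex.toLowerCase()
}

console.log(normalizeAddress('2A0C0DBECC7E4D658F48E01E3FA353F44050C208'))
// 0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208
```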
+
+### Do I need special headers besides authentication?
+
+While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer <jwt-token>`. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`).
+
+### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this?
+
+For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`.
+
+### Is the Token API part of The Graph's GraphQL service?
+
+No, the Token API is a separate RESTful service. Unlike traditional Subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy Subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints.
+
+### Do I need to use MCP or tools like Claude, Cline, or Cursor?
+
+No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required.
diff --git a/website/src/pages/ar/token-api/mcp/claude.mdx b/website/src/pages/ar/token-api/mcp/claude.mdx
new file mode 100644
index 000000000000..12a036b6fc24
--- /dev/null
+++ b/website/src/pages/ar/token-api/mcp/claude.mdx
@@ -0,0 +1,58 @@
+---
+title: Using Claude Desktop to Access the Token API via MCP
+sidebarTitle: Claude Desktop
+---
+
+## Prerequisites
+
+- [Claude Desktop](https://claude.ai/download) installed.
+- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/).
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path.
+- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version.
+
+
+
+## Configuration
+
+Create or edit your `claude_desktop_config.json` file.
+
+> **Settings** > **Developer** > **Edit Config**
+
+- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `~/.config/Claude/claude_desktop_config.json`
+
+```json label="claude_desktop_config.json"
+{
+ "mcpServers": {
+ "token-api": {
+ "command": "npx",
+ "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
+ "env": {
+ "ACCESS_TOKEN": ""
+ }
+ }
+ }
+}
+```
+
+## Troubleshooting
+
+To enable logs for the MCP, use the `--verbose true` option.
+
+### ENOENT
+
+
+
+Try using the full path of the command instead:
+
+- Run `which npx` or `which bunx` to get the path of the command.
+- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`).
+
+### Server disconnected
+
+
+
+Double-check your API key. Otherwise, check in your browser whether `https://token-api.thegraph.com/sse` is reachable.
+
+> You can always have a look at the full logs under `Claude/logs/mcp.log` and `Claude/logs/mcp-server-pinax.log` for more details.
diff --git a/website/src/pages/ar/token-api/mcp/cline.mdx b/website/src/pages/ar/token-api/mcp/cline.mdx
new file mode 100644
index 000000000000..ef98e45939fe
--- /dev/null
+++ b/website/src/pages/ar/token-api/mcp/cline.mdx
@@ -0,0 +1,52 @@
+---
+title: Using Cline to Access the Token API via MCP
+sidebarTitle: Cline
+---
+
+## Prerequisites
+
+- [Cline](https://cline.bot/) installed.
+- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/).
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path.
+- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version.
+
+
+
+## Configuration
+
+Create or edit your `cline_mcp_settings.json` file.
+
+> **MCP Servers** > **Installed** > **Configure MCP Servers**
+
+```json label="cline_mcp_settings.json"
+{
+ "mcpServers": {
+ "mcp-pinax": {
+ "command": "npx",
+ "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
+ "env": {
+ "ACCESS_TOKEN": ""
+ }
+ }
+ }
+}
+```
+
+## Troubleshooting
+
+To enable logs for the MCP, use the `--verbose true` option.
+
+### ENOENT
+
+
+
+Try using the full path of the command instead:
+
+- Run `which npx` or `which bunx` to get the path of the command.
+- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`).
+
+### Server disconnected
+
+
+
+Double-check your API key. Otherwise, check in your browser whether `https://token-api.thegraph.com/sse` is reachable.
diff --git a/website/src/pages/ar/token-api/mcp/cursor.mdx b/website/src/pages/ar/token-api/mcp/cursor.mdx
new file mode 100644
index 000000000000..658108d1337b
--- /dev/null
+++ b/website/src/pages/ar/token-api/mcp/cursor.mdx
@@ -0,0 +1,50 @@
+---
+title: Using Cursor to Access the Token API via MCP
+sidebarTitle: Cursor
+---
+
+## Prerequisites
+
+- [Cursor](https://www.cursor.com/) installed.
+- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/).
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path.
+- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version.
+
+
+
+## Configuration
+
+Create or edit your `~/.cursor/mcp.json` file.
+
+> **Cursor Settings** > **MCP** > **Add new global MCP Server**
+
+```json label="mcp.json"
+{
+ "mcpServers": {
+ "mcp-pinax": {
+ "command": "npx",
+ "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
+ "env": {
+ "ACCESS_TOKEN": ""
+ }
+ }
+ }
+}
+```
+
+## Troubleshooting
+
+
+
+To enable logs for the MCP, use the `--verbose true` option.
+
+### ENOENT
+
+Try using the full path of the command instead:
+
+- Run `which npx` or `which bunx` to get the path of the command.
+- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`).
+
+### Server disconnected
+
+Double-check your API key. Otherwise, check in your browser whether `https://token-api.thegraph.com/sse` is reachable.
diff --git a/website/src/pages/ar/token-api/monitoring/get-health.mdx b/website/src/pages/ar/token-api/monitoring/get-health.mdx
new file mode 100644
index 000000000000..57a827b3343b
--- /dev/null
+++ b/website/src/pages/ar/token-api/monitoring/get-health.mdx
@@ -0,0 +1,7 @@
+---
+title: Get health status of the API
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getHealth
+---
diff --git a/website/src/pages/ar/token-api/monitoring/get-networks.mdx b/website/src/pages/ar/token-api/monitoring/get-networks.mdx
new file mode 100644
index 000000000000..0ea3c485ddb9
--- /dev/null
+++ b/website/src/pages/ar/token-api/monitoring/get-networks.mdx
@@ -0,0 +1,7 @@
+---
+title: Get supported networks of the API
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNetworks
+---
diff --git a/website/src/pages/ar/token-api/monitoring/get-version.mdx b/website/src/pages/ar/token-api/monitoring/get-version.mdx
new file mode 100644
index 000000000000..0be6b7e92d04
--- /dev/null
+++ b/website/src/pages/ar/token-api/monitoring/get-version.mdx
@@ -0,0 +1,7 @@
+---
+title: Get the version of the API
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getVersion
+---
diff --git a/website/src/pages/ar/token-api/quick-start.mdx b/website/src/pages/ar/token-api/quick-start.mdx
new file mode 100644
index 000000000000..c5fa07fa9371
--- /dev/null
+++ b/website/src/pages/ar/token-api/quick-start.mdx
@@ -0,0 +1,79 @@
+---
+title: Token API Quick Start
+sidebarTitle: بداية سريعة
+---
+
+
+
+> [!CAUTION] This product is currently in Beta and under active development. If you have any feedback, please reach out to us on [Discord](https://discord.gg/graphprotocol).
+
+The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application.
+
+The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude.
+
+## Prerequisites
+
+Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu.
+
+## Authentication
+
+All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer <jwt-token>`.
+
+```json
+{
+ "headers": {
+ "Authorization": "Bearer eyJh•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••NnlA"
+ }
+}
+```
+
+## Using JavaScript
+
+Make an API request using **JavaScript** by adding the request parameters, and then fetching from the relevant endpoint. For example:
+
+```js label="index.js"
+const address = '0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208'
+const options = {
+ method: 'GET',
+ headers: {
+ Accept: 'application/json',
+    Authorization: 'Bearer <jwt-token>',
+ },
+}
+
+fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options)
+ .then((response) => response.json())
+ .then((response) => console.log(response))
+ .catch((err) => console.error(err))
+```
+
+Make sure to replace `<jwt-token>` with the JWT token generated from your API key.
+
+## Using cURL (Command Line)
+
+To make an API request using **cURL**, open your command line and run the following command.
+
+```curl
+curl --request GET \
+ --url https://token-api.thegraph.com/balances/evm/0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208 \
+ --header 'Accept: application/json' \
+  --header 'Authorization: Bearer <jwt-token>'
+```
+
+Make sure to replace `<jwt-token>` with the JWT token generated from your API key.
+
+> Most Unix-like systems come with cURL preinstalled. For Windows, you may need to install cURL.
+
+## Troubleshooting
+
+If the API call fails, try printing out the full response object for additional error details. For example:
+
+```js label="index.js"
+fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options)
+ .then((response) => {
+ console.log('Status Code:', response.status)
+ return response.json()
+ })
+ .then((data) => console.log(data))
+ .catch((err) => console.error('Error:', err))
+```
diff --git a/website/src/pages/cs/about.mdx b/website/src/pages/cs/about.mdx
index 256519660a73..1f43c663437f 100644
--- a/website/src/pages/cs/about.mdx
+++ b/website/src/pages/cs/about.mdx
@@ -30,25 +30,25 @@ Blockchain properties, such as finality, chain reorganizations, and uncled block
## The Graph Provides a Solution
-The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API.
+The Graph solves this challenge with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "Subgraphs") can then be queried with a standard GraphQL API.
Today, there is a decentralized protocol that is backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node) that enables this process.
### How The Graph Functions
-Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, processes it, and stores it so that it can be seamlessly queried via GraphQL.
+Indexing blockchain data is very difficult, but The Graph makes it easy. The Graph learns how to index Ethereum data by using Subgraphs. Subgraphs are custom APIs built on blockchain data that extract data from a blockchain, process it, and store it so that it can be seamlessly queried via GraphQL.
#### Specifics
-- The Graph uses subgraph descriptions, which are known as the subgraph manifest inside the subgraph.
+- The Graph uses Subgraph descriptions, which are known as the Subgraph manifest inside the Subgraph.
-- The subgraph description outlines the smart contracts of interest for a subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database.
+- The Subgraph description outlines the smart contracts of interest for a Subgraph, the events within those contracts to focus on, and how to map event data to the data that The Graph will store in its database.
-- When creating a subgraph, you need to write a subgraph manifest.
+- When creating a Subgraph, you need to write a Subgraph manifest.
-- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that subgraph.
+- After writing the `subgraph manifest`, you can use the Graph CLI to store the definition in IPFS and instruct an Indexer to start indexing data for that Subgraph.
-The diagram below provides more detailed information about the flow of data after a subgraph manifest has been deployed with Ethereum transactions.
+The diagram below provides more detailed information about the flow of data after a Subgraph manifest has been deployed with Ethereum transactions.

@@ -56,12 +56,12 @@ Průběh se řídí těmito kroky:
1. Dapp přidává data do Ethereum prostřednictvím transakce na chytrém kontraktu.
2. Chytrý smlouva vysílá při zpracování transakce jednu nebo více událostí.
-3. Uzel grafu neustále vyhledává nové bloky Ethereum a data pro váš podgraf, která mohou obsahovat.
-4. Uzel grafu v těchto blocích vyhledá události Etherea pro váš podgraf a spustí vámi zadané mapovací obsluhy. Mapování je modul WASM, který vytváří nebo aktualizuje datové entity, které Uzel grafu ukládá v reakci na události Ethereum.
+3. Graph Node continually scans Ethereum for new blocks and the data for your Subgraph they may contain.
+4. Graph Node finds Ethereum events for your Subgraph in these blocks and runs the mapping handlers you provided. The mapping is a WASM module that creates or updates the data entities that Graph Node stores in response to Ethereum events.
5. Aplikace dapp se dotazuje grafického uzlu na data indexovaná z blockchainu pomocí [GraphQL endpoint](https://graphql.org/learn/). Uzel Grafu zase překládá dotazy GraphQL na dotazy pro své podkladové datové úložiště, aby tato data načetl, přičemž využívá indexovací schopnosti úložiště. Dapp tato data zobrazuje v bohatém UI pro koncové uživatele, kteří je používají k vydávání nových transakcí na platformě Ethereum. Cyklus se opakuje.
## Další kroky
-The following sections provide a more in-depth look at subgraphs, their deployment and data querying.
+The following sections provide a more in-depth look at Subgraphs, their deployment and data querying.
-Before you write your own subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed subgraphs. Each subgraph's page includes a GraphQL playground, allowing you to query its data.
+Before you write your own Subgraph, it's recommended to explore [Graph Explorer](https://thegraph.com/explorer) and review some of the already deployed Subgraphs. Each Subgraph's page includes a GraphQL playground, allowing you to query its data.
diff --git a/website/src/pages/cs/archived/arbitrum/arbitrum-faq.mdx b/website/src/pages/cs/archived/arbitrum/arbitrum-faq.mdx
index 050d1a0641aa..df47adfff704 100644
--- a/website/src/pages/cs/archived/arbitrum/arbitrum-faq.mdx
+++ b/website/src/pages/cs/archived/arbitrum/arbitrum-faq.mdx
@@ -14,7 +14,7 @@ By scaling The Graph on L2, network participants can now benefit from:
- Zabezpečení zděděné po Ethereum
-Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of subgraphs. Developers can deploy and update subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas.
+Scaling the protocol smart contracts onto L2 allows network participants to interact more frequently at a reduced cost in gas fees. For example, Indexers can open and close allocations more frequently to index a greater number of Subgraphs. Developers can deploy and update Subgraphs more easily, and Delegators can delegate GRT more frequently. Curators can add or remove signal to a larger number of Subgraphs–actions previously considered too cost-prohibitive to perform frequently due to gas.
Komunita Graf se v loňském roce rozhodla pokračovat v Arbitrum po výsledku diskuze [GIP-0031] (https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305).
@@ -39,7 +39,7 @@ Pro využití výhod používání a Graf na L2 použijte rozevírací přepína

-## Jako vývojář podgrafů, Spotřebitel dat, indexer, kurátor, nebo delegátor, co mám nyní udělat?
+## As a Subgraph developer, data consumer, Indexer, Curator, or Delegator, what do I need to do now?
Network participants must move to Arbitrum to continue participating in The Graph Network. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) for additional support.
@@ -51,9 +51,9 @@ Všechny chytré smlouvy byly důkladně [auditovány](https://github.com/graphp
Vše bylo důkladně otestováno, a je připraven pohotovostní plán, který zajistí bezpečný a bezproblémový přechod. Podrobnosti naleznete [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20).
-## Are existing subgraphs on Ethereum working?
+## Are existing Subgraphs on Ethereum working?
-All subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your subgraphs operate seamlessly.
+All Subgraphs are now on Arbitrum. Please refer to [L2 Transfer Tool Guide](/archived/arbitrum/l2-transfer-tools-guide/) to ensure your Subgraphs operate seamlessly.
## Does GRT have a new smart contract deployed on Arbitrum?
diff --git a/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-faq.mdx b/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-faq.mdx
index 88e1d9e632a2..439e83f3864b 100644
--- a/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-faq.mdx
+++ b/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-faq.mdx
@@ -24,9 +24,9 @@ Výjimkou jsou peněženky s chytrými smlouvami, jako je multisigs: jedná se o
Nástroje pro přenos L2 používají k odesílání zpráv z L1 do L2 nativní mechanismus Arbitrum. Tento mechanismus se nazývá 'retryable ticket,' a všechny nativní tokenové můstky, včetně můstku Arbitrum GRT, ho používají. Další informace o opakovatelných ticketch naleznete v části [Arbitrum docs](https://docs.arbitrum.io/arbos/l1-to-l2-messaging).
-Při přenosu aktiv (podgraf, podíl, delegace nebo kurátorství) do L2 se odešle zpráva přes můstek Arbitrum GRT, která vytvoří opakovatelný tiket v L2. Nástroj pro převod zahrnuje v transakci určitou hodnotu ETH, která se použije na 1) zaplacení vytvoření tiketu a 2) zaplacení plynu pro provedení tiketu v L2. Se však ceny plynu mohou v době, než je ticket připraven k provedení v režimu L2, měnit. Je možné, že se tento pokus o automatické provedení nezdaří. Když se tak stane, most Arbitrum udrží opakovatelný tiket naživu až 7 dní a kdokoli se může pokusit o jeho "vykoupení" (což vyžaduje peněženku s určitým množstvím ETH propojenou s mostem Arbitrum).
+When you transfer your assets (Subgraph, stake, delegation or curation) to L2, a message is sent through the Arbitrum GRT bridge which creates a retryable ticket in L2. The transfer tool includes some ETH value in the transaction, that is used to 1) pay to create the ticket and 2) pay for the gas to execute the ticket in L2. However, because gas prices might vary in the time until the ticket is ready to execute in L2, it is possible that this auto-execution attempt fails. When that happens, the Arbitrum bridge will keep the retryable ticket alive for up to 7 days, and anyone can retry “redeeming” the ticket (which requires a wallet with some ETH bridged to Arbitrum).
-Tomuto kroku říkáme 'Potvrzení' ve všech nástrojích pro přenos - ve většině případů se spustí automaticky, protože automatické provedení je většinou úspěšné, ale je důležité, abyste se ujistili, že proběhlo. Pokud se to nepodaří a během 7 dnů nedojde k žádnému úspěšnému opakování, můstek Arbitrum tiket zahodí a vaše aktiva (podgraf, podíl, delegace nebo kurátorství) budou ztracena a nebude možné je obnovit. Vývojáři The Graph jádra mají k dispozici monitorovací systém, který tyto situace odhaluje a snaží se lístky uplatnit dříve, než bude pozdě, ale v konečném důsledku je vaší odpovědností zajistit, aby byl váš přenos dokončen včas. Pokud máte potíže s potvrzením transakce, obraťte se na nás pomocí [tohoto formuláře](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) a hlavní vývojáři vám pomohou.
+This is what we call the “Confirm” step in all the transfer tools - it will run automatically in most cases, as the auto-execution is most often successful, but it is important that you check back to make sure it went through. If it doesn’t succeed and there are no successful retries in 7 days, the Arbitrum bridge will discard the ticket, and your assets (Subgraph, stake, delegation or curation) will be lost and can’t be recovered. The Graph core devs have a monitoring system in place to detect these situations and try to redeem the tickets before it’s too late, but it is ultimately your responsibility to ensure your transfer is completed in time. If you’re having trouble confirming your transaction, please reach out using [this form](https://noteforms.com/forms/notionform-l2-transfer-tooling-issues-0ogqfu?notionforms=1&utm_source=notionforms) and core devs will be there to help you.
### I initiated a transfer of my delegation/stake/curation and I'm not sure whether it made it to L2. How can I confirm that it was transferred correctly?
@@ -36,43 +36,43 @@ Pokud máte k dispozici hash transakce L1 (který zjistíte, když se podíváte
## Subgraph Transfer
-### Jak mohu přenést svůj podgraf?
+### How do I transfer my Subgraph?
-Chcete-li přenést svůj podgraf, musíte provést následující kroky:
+To transfer your Subgraph, you will need to complete the following steps:
1. Initiate the transfer on Ethereum mainnet
2. Wait 20 minutes for confirmation
-3. Potvrzení přenosu podgrafů na Arbitrum\*
+3. Confirm Subgraph transfer on Arbitrum\*
-4. Úplné zveřejnění podgrafu na arbitrum
+4. Finish publishing Subgraph on Arbitrum
5. Update query URL (recommended)
-\*Upozorňujeme, že převod musíte potvrdit do 7 dnů, jinak může dojít ke ztrátě vašeho podgrafu. Ve většině případů se tento krok provede automaticky, ale v případě prudkého nárůstu cen plynu na Arbitru může být nutné ruční potvrzení. Pokud se během tohoto procesu vyskytnou nějaké problémy, budou k dispozici zdroje, které vám pomohou: kontaktujte podporu na adrese support@thegraph.com nebo na [Discord](https://discord.gg/graphprotocol).
+\*Note that you must confirm the transfer within 7 days otherwise your Subgraph may be lost. In most cases, this step will run automatically, but a manual confirmation may be needed if there is a gas price spike on Arbitrum. If there are any issues during this process, there will be resources to help: contact support at support@thegraph.com or on [Discord](https://discord.gg/graphprotocol).
### Where should I initiate my transfer from?
-Přenos můžete zahájit v [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) nebo na libovolné stránce s detaily subgrafu. "Kliknutím na tlačítko 'Transfer Subgraph' na stránce s podrobnostmi o podgrafu zahájíte přenos.
+You can initiate your transfer from the [Subgraph Studio](https://thegraph.com/studio/), [Explorer,](https://thegraph.com/explorer) or any Subgraph details page. Click the "Transfer Subgraph" button in the Subgraph details page to start the transfer.
-### Jak dlouho musím čekat, než bude můj podgraf přenesen
+### How long do I need to wait until my Subgraph is transferred
The transfer takes approximately 20 minutes to complete. The Arbitrum bridge works in the background to finish the bridge transfer automatically. In some cases, gas costs may spike and you will need to confirm the transaction again.
-### Bude můj podgraf zjistitelný i poté, co jej přenesu do L2?
+### Will my Subgraph still be discoverable after I transfer it to L2?
-Váš podgraf bude zjistitelný pouze v síti, ve které je publikován. Pokud se například váš subgraf nachází na Arbitrum One, pak jej najdete pouze v Průzkumníku na Arbitrum One a na Ethereum jej nenajdete. Ujistěte se, že máte v přepínači sítí v horní části stránky vybranou možnost Arbitrum One, abyste se ujistili, že jste ve správné síti. Po přenosu se podgraf L1 zobrazí jako zastaralý.
+Your Subgraph will only be discoverable on the network it is published to. For example, if your Subgraph is on Arbitrum One, then you can only find it in Explorer on Arbitrum One and will not be able to find it on Ethereum. Please ensure that you have Arbitrum One selected in the network switcher at the top of the page to ensure you are on the correct network. After the transfer, the L1 Subgraph will appear as deprecated.
-### Musí být můj podgraf zveřejněn, abych ho mohl přenést?
+### Does my Subgraph need to be published to transfer it?
-Abyste mohli využít nástroj pro přenos subgrafů, musí být váš subgraf již zveřejněn v mainnet Ethereum a musí mít nějaký kurátorský signál vlastněný peněženkou, která subgraf vlastní. Pokud váš subgraf není zveřejněn, doporučujeme vám jednoduše publikovat přímo na Arbitrum One - související poplatky za plyn budou podstatně nižší. Pokud chcete přenést publikovaný podgraf, ale účet vlastníka na něm nemá kurátorský signál, můžete z tohoto účtu signalizovat malou částku (např. 1 GRT); nezapomeňte zvolit "auto-migrating" signál.
+To take advantage of the Subgraph transfer tool, your Subgraph must already be published to Ethereum mainnet and must have some curation signal owned by the wallet that owns the Subgraph. If your Subgraph is not published, it is recommended you simply publish directly on Arbitrum One - the associated gas fees will be considerably lower. If you want to transfer a published Subgraph but the owner account hasn't curated any signal on it, you can signal a small amount (e.g. 1 GRT) from that account; make sure to choose "auto-migrating" signal.
-### Co se stane s verzí mého subgrafu na ethereum mainnet po převodu na Arbitrum?
+### What happens to the Ethereum mainnet version of my Subgraph after I transfer to Arbitrum?
-Po převedení vašeho subgrafu na Arbitrum bude verze mainnet Ethereum zastaralá. Doporučujeme vám aktualizovat adresu URL dotazu do 48 hodin. Je však zavedena ochranná lhůta, která udržuje adresu URL mainnet funkční, aby bylo možné aktualizovat podporu dapp třetích stran.
+After transferring your Subgraph to Arbitrum, the Ethereum mainnet version will be deprecated. We recommend you update your query URL within 48 hours. However, there is a grace period in place that keeps your mainnet URL functioning so that any third-party dapp support can be updated.
### Do I also need to re-publish on Arbitrum after I transfer?
@@ -80,21 +80,21 @@ Po uplynutí 20minutového okna pro převod budete muset převod potvrdit transa
### Will my endpoint experience downtime while re-publishing?
-Je nepravděpodobné, ale je možné, že dojde ke krátkému výpadku v závislosti na tom, které indexátory podporují podgraf na L1 a zda jej indexují, dokud není podgraf plně podporován na L2.
+It is unlikely, but possible to experience a brief downtime depending on which Indexers are supporting the Subgraph on L1 and whether they keep indexing it until the Subgraph is fully supported on L2.
### Is publishing and versioning the same on L2 as on Ethereum mainnet?
-Ano. Při publikování v aplikaci Subgraph Studio vyberte jako publikovanou síť Arbitrum One. Ve Studiu bude k dispozici nejnovější koncový bod, který odkazuje na nejnovější aktualizovanou verzi podgrafu.
+Yes. Select Arbitrum One as your published network when publishing in Subgraph Studio. In the Studio, the latest endpoint will be available which points to the latest updated version of the Subgraph.
-### Bude se kurátorství mého podgrafu pohybovat spolu s mým podgrafem?
+### Will my Subgraph's curation move with my Subgraph?
-Pokud jste zvolili automatickou migraci signálu, 100 % vaší vlastní kurátorství se přesune spolu s vaším subgrafem do Arbitrum One. Veškerý signál kurátorství podgrafu bude v okamžiku převodu převeden na GRT a GRT odpovídající vašemu signálu kurátorství bude použit k ražbě signálu na podgrafu L2.
+If you've chosen auto-migrating signal, 100% of your own curation will move with your Subgraph to Arbitrum One. All of the Subgraph's curation signal will be converted to GRT at the time of the transfer, and the GRT corresponding to your curation signal will be used to mint signal on the L2 Subgraph.
-Ostatní kurátoři se mohou rozhodnout, zda stáhnou svou část GRT, nebo ji také převedou na L2, aby vyrazili signál na stejném podgraf.
+Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph.
-### Mohu svůj subgraf po převodu přesunout zpět do mainnet Ethereum?
+### Can I move my Subgraph back to Ethereum mainnet after I transfer?
-Po přenosu bude vaše verze tohoto podgrafu v síti Ethereum mainnet zneplatněna. Pokud se chcete přesunout zpět do mainnetu, musíte provést nové nasazení a publikovat zpět do mainnet. Převod zpět do mainnetu Etherea se však důrazně nedoporučuje, protože odměny za indexování budou nakonec distribuovány výhradně na Arbitrum One.
+Once transferred, your Ethereum mainnet version of this Subgraph will be deprecated. If you would like to move back to mainnet, you will need to redeploy and publish back to mainnet. However, transferring back to Ethereum mainnet is strongly discouraged as indexing rewards will eventually be distributed entirely on Arbitrum One.
### Why do I need bridged ETH to complete my transfer?
@@ -206,19 +206,19 @@ Chcete-li přenést své kurátorství, musíte provést následující kroky:
\*If necessary - i.e. if you are using a contract address.
-### Jak se dozvím, že se mnou kurátorovaný podgraf přesunul do L2?
+### How will I know if the Subgraph I curated has moved to L2?
-Při zobrazení stránky s podrobnostmi podgrafu se zobrazí banner s upozorněním, že tento podgraf byl přenesen. Můžete následovat výzvu k přenosu kurátorství. Tyto informace najdete také na stránce s podrobnostmi o podgrafu, který se přesunul.
+When viewing the Subgraph details page, a banner will notify you that this Subgraph has been transferred. You can follow the prompt to transfer your curation. You can also find this information on the Subgraph details page of any Subgraph that has moved.
### What if I don't want to move my curation to L2?
-Pokud je podgraf vyřazen, máte možnost stáhnout svůj signál. Stejně tak pokud se podgraf přesunul do L2, můžete si vybrat, zda chcete stáhnout svůj signál v mainnet Ethereum, nebo signál poslat do L2.
+When a Subgraph is deprecated you have the option to withdraw your signal. Similarly, if a Subgraph has moved to L2, you can choose to withdraw your signal in Ethereum mainnet or send the signal to L2.
### How do I know if my curation successfully transferred?

Signal details will be accessible via Explorer approximately 20 minutes after the L2 transfer tool is initiated.
-### Mohu přenést své kurátorství na více než jeden podgraf najednou?
+### Can I transfer my curation on more than one Subgraph at a time?
There is no bulk transfer option at this time.
@@ -266,7 +266,7 @@ Nástroj pro převod L2 dokončí převod vašeho podílu přibližně za 20 min
### Do I need to index on Arbitrum before I transfer my stake?
-Před nastavením indexování můžete nejprve efektivně převést svůj podíl, ale nebudete si moci nárokovat žádné odměny na L2, dokud nepřidělíte podgrafy na L2, neindexujete je a nepředložíte POIs.
+You can transfer your stake before setting up indexing, but you will not be able to claim any rewards on L2 until you allocate to Subgraphs on L2, index them, and present POIs.
### Can Delegators move their delegation before I move my indexing stake?
diff --git a/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-guide.mdx b/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-guide.mdx
index 69717e46ed39..94b78981db6b 100644
--- a/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-guide.mdx
+++ b/website/src/pages/cs/archived/arbitrum/l2-transfer-tools-guide.mdx
@@ -6,53 +6,53 @@ Graph usnadnil přechod na úroveň L2 v Arbitrum One. Pro každého účastník
Some frequent questions about these tools are answered in the [L2 Transfer Tools FAQ](/archived/arbitrum/l2-transfer-tools-faq/). The FAQs contain in-depth explanations of how to use the tools, how they work, and things to keep in mind when using them.
-## Jak přenést podgraf do Arbitrum (L2)
+## How to transfer your Subgraph to Arbitrum (L2)
-## Výhody přenosu podgrafů
+## Benefits of transferring your Subgraphs
The Graph community and core devs have been [preparing](https://forum.thegraph.com/t/gip-0031-arbitrum-grt-bridge/3305) to move to Arbitrum over the past year. Arbitrum, a layer 2 or "L2" blockchain, inherits the security of Ethereum but provides drastically lower gas fees.
-Když publikujete nebo aktualizujete svůj subgraf v síti The Graph Network, komunikujete s chytrými smlouvami na protokolu, což vyžaduje platbu za plyn pomocí ETH. Přesunutím subgrafů do Arbitrum budou veškeré budoucí aktualizace subgrafů vyžadovat mnohem nižší poplatky za plyn. Nižší poplatky a skutečnost, že křivky vazby kurátorů na L2 jsou ploché, také usnadňují ostatním kurátorům kurátorství na vašem podgrafu, což zvyšuje odměny pro indexátory na vašem podgrafu. Toto prostředí s nižšími náklady také zlevňuje indexování a obsluhu subgrafu pro indexátory. Odměny za indexování se budou v následujících měsících na Arbitrum zvyšovat a na mainnetu Ethereum snižovat, takže stále více indexerů bude převádět své podíly a zakládat své operace na L2.
+When you publish or upgrade your Subgraph to The Graph Network, you're interacting with smart contracts on the protocol and this requires paying for gas using ETH. By moving your Subgraphs to Arbitrum, any future updates to your Subgraph will require much lower gas fees. The lower fees, and the fact that curation bonding curves on L2 are flat, also make it easier for other Curators to curate on your Subgraph, increasing the rewards for Indexers on your Subgraph. This lower-cost environment also makes it cheaper for Indexers to index and serve your Subgraph. Indexing rewards will be increasing on Arbitrum and decreasing on Ethereum mainnet over the coming months, so more and more Indexers will be transferring their stake and setting up their operations on L2.
-## Porozumění tomu, co se děje se signálem, podgrafem L1 a adresami URL dotazů
+## Understanding what happens with signal, your L1 Subgraph and query URLs
-Při přenosu podgrafu do Arbitrum se používá můstek Arbitrum GRT, který zase používá nativní můstek Arbitrum k odeslání podgrafu do L2. Při "přenosu" se subgraf v mainnetu znehodnotí a odešlou se informace pro opětovné vytvoření subgrafu v L2 pomocí mostu. Zahrnuje také GRT vlastníka podgrafu, který již byl signalizován a který musí být větší než nula, aby most převod přijal.
+Transferring a Subgraph to Arbitrum uses the Arbitrum GRT bridge, which in turn uses the native Arbitrum bridge, to send the Subgraph to L2. The "transfer" will deprecate the Subgraph on mainnet and send the information to re-create the Subgraph on L2 using the bridge. It will also include the Subgraph owner's signaled GRT, which must be more than zero for the bridge to accept the transfer.
-Pokud zvolíte převod podgrafu, převede se veškerý signál kurátoru podgrafu na GRT. To je ekvivalentní "znehodnocení" podgrafu v síti mainnet. GRT odpovídající vašemu kurátorství budou spolu s podgrafem odeslány na L2, kde budou vaším jménem použity k ražbě signálu.
+When you choose to transfer the Subgraph, this will convert all of the Subgraph's curation signal to GRT. This is equivalent to "deprecating" the Subgraph on mainnet. The GRT corresponding to your curation will be sent to L2 together with the Subgraph, where they will be used to mint signal on your behalf.
-Ostatní kurátoři se mohou rozhodnout, zda si stáhnou svůj podíl GRT, nebo jej také převedou na L2, aby na stejném podgrafu vyrazili signál. Pokud vlastník podgrafu nepřevede svůj podgraf na L2 a ručně jej znehodnotí prostřednictvím volání smlouvy, pak budou Kurátoři upozorněni a budou moci stáhnout svou kurátorskou funkci.
+Other Curators can choose whether to withdraw their fraction of GRT, or also transfer it to L2 to mint signal on the same Subgraph. If a Subgraph owner does not transfer their Subgraph to L2 and manually deprecates it via a contract call, then Curators will be notified and will be able to withdraw their curation.
-Jakmile je podgraf převeden, protože veškerá kurátorská činnost je převedena na GRT, indexátoři již nebudou dostávat odměny za indexování podgrafu. Budou však existovat indexátory, které 1) budou obsluhovat převedené podgrafy po dobu 24 hodin a 2) okamžitě začnou indexovat podgraf na L2. Protože tyto Indexery již mají podgraf zaindexovaný, nemělo by být nutné čekat na synchronizaci podgrafu a bude možné se na podgraf na L2 dotazovat téměř okamžitě.
+As soon as the Subgraph is transferred, since all curation is converted to GRT, Indexers will no longer receive rewards for indexing the Subgraph. However, there will be Indexers that will 1) keep serving transferred Subgraphs for 24 hours, and 2) immediately start indexing the Subgraph on L2. Since these Indexers already have the Subgraph indexed, there should be no need to wait for the Subgraph to sync, and it will be possible to query the L2 Subgraph almost immediately.
-Dotazy do podgrafu L2 bude nutné zadávat na jinou adresu URL (na `arbitrum-gateway.thegraph.com`), ale adresa URL L1 bude fungovat nejméně 48 hodin. Poté bude brána L1 přeposílat dotazy na bránu L2 (po určitou dobu), což však zvýší latenci, takže se doporučuje co nejdříve přepnout všechny dotazy na novou adresu URL.
+Queries to the L2 Subgraph will need to be done to a different URL (on `arbitrum-gateway.thegraph.com`), but the L1 URL will continue working for at least 48 hours. After that, the L1 gateway will forward queries to the L2 gateway (for some time), but this will add latency so it is recommended to switch all your queries to the new URL as soon as possible.
## Choosing your L2 wallet
-Když jste publikovali svůj podgraf na hlavní síti (mainnet), použili jste připojenou peněženku, která vlastní NFT reprezentující tento podgraf a umožňuje vám publikovat aktualizace.
+When you published your Subgraph on mainnet, you used a connected wallet to create the Subgraph, and this wallet owns the NFT that represents this Subgraph and allows you to publish updates.
-Při přenosu podgrafu do Arbitrum si můžete vybrat jinou peněženku, která bude vlastnit tento podgraf NFT na L2.
+When transferring the Subgraph to Arbitrum, you can choose a different wallet that will own this Subgraph NFT on L2.
If you're using a "regular" wallet like MetaMask (an Externally Owned Account or EOA, i.e. a wallet that is not a smart contract), then this is optional, and it is recommended to keep the same owner address as in L1.
-Pokud používáte peněženku s chytrým kontraktem, jako je multisig (např. Trezor), pak je nutné zvolit jinou adresu peněženky L2, protože je pravděpodobné, že tento účet existuje pouze v mainnetu a nebudete moci provádět transakce na Arbitrum pomocí této peněženky. Pokud chcete i nadále používat peněženku s chytrým kontraktem nebo multisig, vytvořte si na Arbitrum novou peněženku a její adresu použijte jako vlastníka L2 svého subgrafu.
+If you're using a smart contract wallet, like a multisig (e.g. a Safe), then choosing a different L2 wallet address is mandatory, as it is most likely that this account only exists on mainnet and you will not be able to make transactions on Arbitrum using this wallet. If you want to keep using a smart contract wallet or multisig, create a new wallet on Arbitrum and use its address as the L2 owner of your Subgraph.
-**Je velmi důležité používat adresu peněženky, kterou máte pod kontrolou a která může provádět transakce na Arbitrum. V opačném případě bude podgraf ztracen a nebude možné jej obnovit.**
+**It is very important to use a wallet address that you control, and that can make transactions on Arbitrum. Otherwise, the Subgraph will be lost and cannot be recovered.**
## Preparing for the transfer: bridging some ETH
-Přenos podgrafu zahrnuje odeslání transakce přes můstek a následné provedení další transakce na Arbitrum. První transakce využívá ETH na mainnetu a obsahuje nějaké ETH na zaplacení plynu, když je zpráva přijata na L2. Pokud však tento plyn nestačí, je třeba transakci zopakovat a zaplatit za plyn přímo na L2 (to je 'Krok 3: Potvrzení převodu' níže). Tento krok musí být proveden do 7 dnů od zahájení převodu\*\*. Druhá transakce ('Krok 4: Dokončení převodu na L2') bude navíc provedena přímo na Arbitrum. Z těchto důvodů budete potřebovat nějaké ETH na peněžence Arbitrum. Pokud používáte multisig nebo smart contract účet, ETH bude muset být v běžné peněžence (EOA), kterou používáte k provádění transakcí, nikoli na samotné multisig peněžence.
+Transferring the Subgraph involves sending a transaction through the bridge, and then executing another transaction on Arbitrum. The first transaction uses ETH on mainnet, and includes some ETH to pay for gas when the message is received on L2. However, if this gas is insufficient, you will have to retry the transaction and pay for the gas directly on L2 (this is "Step 3: Confirming the transfer" below). This step **must be executed within 7 days of starting the transfer**. Moreover, the second transaction ("Step 4: Finishing the transfer on L2") will be done directly on Arbitrum. For these reasons, you will need some ETH on an Arbitrum wallet. If you're using a multisig or smart contract account, the ETH will need to be in the regular (EOA) wallet that you are using to execute the transactions, not on the multisig wallet itself.
You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io). Since gas fees on Arbitrum are lower, you should only need a small amount. It is recommended that you start at a low threshold (e.g. 0.01 ETH) for your transaction to be approved.
-## Hledání nástroje pro přenos podgrafu
+## Finding the Subgraph Transfer Tool
-Nástroj pro přenos L2 najdete při prohlížení stránky svého podgrafu v aplikaci Subgraph Studio:
+You can find the L2 Transfer Tool when you're looking at your Subgraph's page on Subgraph Studio:

-Je k dispozici také v Průzkumníku, pokud jste připojeni k peněžence, která vlastní podgraf, a na stránce tohoto podgrafu v Průzkumníku:
+It is also available on Explorer if you're connected with the wallet that owns a Subgraph and on that Subgraph's page on Explorer:

@@ -60,19 +60,19 @@ Kliknutím na tlačítko Přenést na L2 otevřete nástroj pro přenos, kde mů
## Step 1: Starting the transfer
-Před zahájením převodu se musíte rozhodnout, která adresa bude vlastnit podgraf na L2 (viz výše "Výběr peněženky L2"), a důrazně doporučujeme mít na Arbitrum již přemostěné ETH pro plyn (viz výše "Příprava na převod: přemostění některých ETH").
+Before starting the transfer, you must decide which address will own the Subgraph on L2 (see "Choosing your L2 wallet" above), and it is strongly recommended to have some ETH for gas already bridged on Arbitrum (see "Preparing for the transfer: bridging some ETH" above).
-Vezměte prosím na vědomí, že přenos podgrafu vyžaduje nenulové množství signálu na podgrafu se stejným účtem, který vlastní podgraf; pokud jste na podgrafu nesignalizovali, budete muset přidat trochu kurátorství (stačí přidat malé množství, například 1 GRT).
+Also please note transferring the Subgraph requires having a nonzero amount of signal on the Subgraph with the same account that owns the Subgraph; if you haven't signaled on the Subgraph you will have to add a bit of curation (adding a small amount like 1 GRT would suffice).
-Po otevření nástroje Transfer Tool budete moci do pole "Receiving wallet address" zadat adresu peněženky L2 - **ujistěte se, že jste zadali správnou adresu**. Kliknutím na Transfer Subgraph budete vyzváni k provedení transakce na vaší peněžence (všimněte si, že je zahrnuta určitá hodnota ETH, abyste zaplatili za plyn L2); tím se zahájí přenos a znehodnotí váš subgraf L1 (více podrobností o tom, co se děje v zákulisí, najdete výše v části "Porozumění tomu, co se děje se signálem, vaším subgrafem L1 a URL dotazů").
+After opening the Transfer Tool, you will be able to input the L2 wallet address into the "Receiving wallet address" field - **make sure you've entered the correct address here**. Clicking on Transfer Subgraph will prompt you to execute the transaction on your wallet (note some ETH value is included to pay for L2 gas); this will initiate the transfer and deprecate your L1 Subgraph (see "Understanding what happens with signal, your L1 Subgraph and query URLs" above for more details on what goes on behind the scenes).
-Pokud tento krok provedete, ujistěte se, že jste pokračovali až do dokončení kroku 3 za méně než 7 dní, jinak se podgraf a váš signál GRT ztratí. To je způsobeno tím, jak funguje zasílání zpráv L1-L2 na Arbitrum: zprávy, které jsou zasílány přes most, jsou "Opakovatelný tiket", které musí být provedeny do 7 dní, a počáteční provedení může vyžadovat opakování, pokud dojde ke skokům v ceně plynu na Arbitrum.
+If you execute this step, **make sure you proceed until completing step 3 in less than 7 days, or the Subgraph and your signal GRT will be lost.** This is due to how L1-L2 messaging works on Arbitrum: messages that are sent through the bridge are "retryable tickets" that must be executed within 7 days, and the initial execution might need a retry if there are spikes in the gas price on Arbitrum.

-## Krok 2: Čekání, až se podgraf dostane do L2
+## Step 2: Waiting for the Subgraph to get to L2
-Po zahájení přenosu se musí zpráva, která odesílá podgraf L1 do L2, šířit přes můstek Arbitrum. To trvá přibližně 20 minut (můstek čeká, až bude blok mainnetu obsahující transakci "bezpečný" před případnými reorgy řetězce).
+After you start the transfer, the message that sends your L1 Subgraph to L2 must propagate through the Arbitrum bridge. This takes approximately 20 minutes (the bridge waits for the mainnet block containing the transaction to be "safe" from potential chain reorgs).
After this wait time, Arbitrum will attempt to auto-execute the transfer on the L2 contracts.
@@ -80,7 +80,7 @@ Po uplynutí této čekací doby se Arbitrum pokusí o automatické provedení p
## Step 3: Confirming the transfer
-Ve většině případů se tento krok provede automaticky, protože plyn L2 obsažený v kroku 1 by měl stačit k provedení transakce, která přijímá podgraf na smlouvách Arbitrum. V některých případech je však možné, že prudký nárůst cen plynu na Arbitrum způsobí selhání tohoto automatického provedení. V takovém případě bude "ticket", který odešle subgraf na L2, čekat na vyřízení a bude vyžadovat opakování pokusu do 7 dnů.
+In most cases, this step will auto-execute as the L2 gas included in step 1 should be sufficient to execute the transaction that receives the Subgraph on the Arbitrum contracts. In some cases, however, it is possible that a spike in gas prices on Arbitrum causes this auto-execution to fail. In this case, the "ticket" that sends your Subgraph to L2 will be pending and require a retry within 7 days.
If that is the case, you will need to connect using an L2 wallet that has some ETH on Arbitrum, switch your wallet network to Arbitrum, and click on "Confirm Transfer" to retry the transaction.
@@ -88,33 +88,33 @@ V takovém případě se musíte připojit pomocí peněženky L2, která má n
## Step 4: Finishing the transfer on L2
-V tuto chvíli byly váš podgraf a GRT přijaty na Arbitrum, ale podgraf ještě není zveřejněn. Budete se muset připojit pomocí peněženky L2, kterou jste si vybrali jako přijímající peněženku, přepnout síť peněženky na Arbitrum a kliknout na "Publikovat subgraf"
+At this point, your Subgraph and GRT have been received on Arbitrum, but the Subgraph is not published yet. You will need to connect using the L2 wallet that you chose as the receiving wallet, switch your wallet network to Arbitrum, and click "Publish Subgraph."
-
+
-
+
-Tím se podgraf zveřejní, aby jej mohly začít obsluhovat indexery pracující na Arbitrum. Rovněž bude zminován kurátorský signál pomocí GRT, které byly přeneseny z L1.
+This will publish the Subgraph so that Indexers that are operating on Arbitrum can start serving it. It will also mint curation signal using the GRT that was transferred from L1.
## Step 5: Updating the query URL
-Váš podgraf byl úspěšně přenesen do Arbitrum! Chcete-li se na podgraf zeptat, nová URL bude:
+Your Subgraph has been successfully transferred to Arbitrum! To query the Subgraph, the new URL will be:
`https://arbitrum-gateway.thegraph.com/api/[api-key]/subgraphs/id/[l2-subgraph-id]`
-Všimněte si, že ID podgrafu v Arbitrum bude jiné než to, které jste měli v mainnetu, ale vždy ho můžete najít v Průzkumníku nebo Studiu. Jak je uvedeno výše (viz "Pochopení toho, co se děje se signálem, vaším subgrafem L1 a URL dotazů"), stará URL adresa L1 bude po krátkou dobu podporována, ale jakmile bude subgraf synchronizován na L2, měli byste své dotazy přepnout na novou adresu.
+Note that the Subgraph ID on Arbitrum will be different from the one you had on mainnet, but you can always find it on Explorer or Studio. As mentioned above (see "Understanding what happens with signal, your L1 Subgraph and query URLs") the old L1 URL will be supported for a short while, but you should switch your queries to the new address as soon as the Subgraph has been synced on L2.
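As a quick sketch, the new query URL can be assembled from your API key and the L2 Subgraph ID. The values below (`my-api-key`, `QmExampleL2SubgraphId`) are hypothetical placeholders, not real credentials or deployment IDs.

```python
def l2_query_url(api_key: str, l2_subgraph_id: str) -> str:
    """Build the Arbitrum gateway query URL for a transferred Subgraph."""
    return (
        "https://arbitrum-gateway.thegraph.com/api/"
        f"{api_key}/subgraphs/id/{l2_subgraph_id}"
    )

# Hypothetical placeholder values:
url = l2_query_url("my-api-key", "QmExampleL2SubgraphId")
print(url)
```

Swapping the gateway host and Subgraph ID in your dapp's configuration is all that changes; the GraphQL queries themselves stay the same.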
## How to transfer your curation to Arbitrum (L2)
-## Porozumění tomu, co se děje s kurátorstvím při přenosu podgrafů do L2
+## Understanding what happens to curation on Subgraph transfers to L2
-Když vlastník podgrafu převede podgraf do Arbitrum, je veškerý signál podgrafu současně převeden na GRT. To se týká "automaticky migrovaného" signálu, tj. signálu, který není specifický pro verzi podgrafu nebo nasazení, ale který následuje nejnovější verzi podgrafu.
+When the owner of a Subgraph transfers a Subgraph to Arbitrum, all of the Subgraph's signal is converted to GRT at the same time. This applies to "auto-migrated" signal, i.e. signal that is not specific to a Subgraph version or deployment but that follows the latest version of a Subgraph.
-Tento převod ze signálu na GRT je stejný, jako kdyby vlastník podgrafu zrušil podgraf v L1. Při depreciaci nebo převodu subgrafu se současně "spálí" veškerý kurátorský signál (pomocí kurátorské vazební křivky) a výsledný GRT je držen inteligentním kontraktem GNS (tedy kontraktem, který se stará o upgrade subgrafu a automatickou migraci signálu). Každý kurátor na tomto subgrafu má tedy nárok na tento GRT úměrný množství podílů, které měl na subgrafu.
+This conversion from signal to GRT is the same as what would happen if the Subgraph owner deprecated the Subgraph in L1. When the Subgraph is deprecated or transferred, all curation signal is "burned" simultaneously (using the curation bonding curve) and the resulting GRT is held by the GNS smart contract (that is the contract that handles Subgraph upgrades and auto-migrated signal). Each Curator on that Subgraph therefore has a claim to that GRT proportional to the amount of shares they had for the Subgraph.
-Část těchto GRT odpovídající vlastníkovi podgrafu je odeslána do L2 spolu s podgrafem.
+A fraction of these GRT corresponding to the Subgraph owner is sent to L2 together with the Subgraph.
-V tomto okamžiku se za kurátorský GRT již nebudou účtovat žádné poplatky za dotazování, takže kurátoři se mohou rozhodnout, zda svůj GRT stáhnou, nebo jej přenesou do stejného podgrafu na L2, kde může být použit k ražbě nového kurátorského signálu. S tímto úkonem není třeba spěchat, protože GRT lze pomáhat donekonečna a každý dostane částku úměrnou svému podílu bez ohledu na to, kdy tak učiní.
+At this point, the curated GRT will not accrue any more query fees, so Curators can choose to withdraw their GRT or transfer it to the same Subgraph on L2, where it can be used to mint new curation signal. There is no rush to do this as the GRT can be held indefinitely and everybody gets an amount proportional to their shares, irrespective of when they do it.
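The pro-rata claim described above can be illustrated with a small sketch. The curator names, share counts, and GRT amount are made up for illustration; in the protocol the conversion from shares to GRT happens through the curation bonding curve, not a flat split, but the resulting GRT is divided proportionally to shares as shown here.

```python
def distribute_grt(total_grt: float, shares: dict) -> dict:
    """Split the GRT held by the GNS contract pro rata by curation shares."""
    total_shares = sum(shares.values())
    return {curator: total_grt * s / total_shares for curator, s in shares.items()}

# Hypothetical example: 1000 GRT results from the burn, split among three curators.
claims = distribute_grt(1000.0, {"owner": 50, "alice": 30, "bob": 20})
# The owner's fraction (500 GRT here) travels to L2 with the Subgraph;
# the other curators can withdraw their claims or transfer them to L2.
```

Because everyone's claim is fixed at transfer time, waiting to withdraw or transfer does not change the amount a Curator receives.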
## Choosing your L2 wallet
@@ -130,9 +130,9 @@ Pokud používáte peněženku s chytrým kontraktem, jako je multisig (např. T
Before starting the transfer, you must decide which address will own the curation on L2 (see "Choosing your L2 wallet" above), and it is recommended to have some ETH for gas already bridged on Arbitrum in case you need to retry the execution of the message on L2. You can buy ETH on some exchanges and withdraw it directly to Arbitrum, or you can use the Arbitrum bridge to send ETH from a mainnet wallet to L2: [bridge.arbitrum.io](http://bridge.arbitrum.io) - since gas fees on Arbitrum are so low, you should only need a small amount, e.g. 0.01 ETH will probably be more than enough.
-Pokud byl podgraf, do kterého kurátor provádí kurátorství, převeden do L2, zobrazí se v Průzkumníku zpráva, že kurátorství provádíte do převedeného podgrafu.
+If a Subgraph that you curate to has been transferred to L2, you will see a message on Explorer telling you that you're curating to a transferred Subgraph.
-Při pohledu na stránku podgrafu můžete zvolit stažení nebo přenos kurátorství. Kliknutím na "Přenést signál do Arbitrum" otevřete nástroj pro přenos.
+When looking at the Subgraph page, you can choose to withdraw or transfer the curation. Clicking on "Transfer Signal to Arbitrum" will open the transfer tool.

@@ -162,4 +162,4 @@ V takovém případě se musíte připojit pomocí peněženky L2, která má n
## Odstranění vašeho kurátorství na L1
-Pokud nechcete posílat GRT na L2 nebo byste raději překlenuli GRT ručně, můžete si na L1 stáhnout svůj kurátorovaný GRT. Na banneru na stránce podgrafu zvolte "Withdraw Signal" a potvrďte transakci; GRT bude odeslán na vaši adresu kurátora.
+If you prefer not to send your GRT to L2, or you'd rather bridge the GRT manually, you can withdraw your curated GRT on L1. On the banner on the Subgraph page, choose "Withdraw Signal" and confirm the transaction; the GRT will be sent to your Curator address.
diff --git a/website/src/pages/cs/archived/sunrise.mdx b/website/src/pages/cs/archived/sunrise.mdx
index 71b86ac159ff..52e8c90d7708 100644
--- a/website/src/pages/cs/archived/sunrise.mdx
+++ b/website/src/pages/cs/archived/sunrise.mdx
@@ -7,61 +7,61 @@ sidebarTitle: Post-Sunrise Upgrade FAQ
## Jaký byl úsvit decentralizovaných dat?
-Úsvit decentralizovaných dat byla iniciativa, kterou vedla společnost Edge & Node. Tato iniciativa umožnila vývojářům podgrafů bezproblémově přejít na decentralizovanou síť Graf.
+The Sunrise of Decentralized Data was an initiative spearheaded by Edge & Node. This initiative enabled Subgraph developers to upgrade to The Graph’s decentralized network seamlessly.
-This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published subgraphs.
+This plan drew on previous developments from The Graph ecosystem, including an upgrade Indexer to serve queries on newly published Subgraphs.
### Co se stalo s hostovanou službou?
-Koncové body dotazů hostované služby již nejsou k dispozici a vývojáři nemohou v hostované službě nasadit nové podgrafy.
+The hosted service query endpoints are no longer available, and developers cannot deploy new Subgraphs on the hosted service.
-Během procesu aktualizace mohli vlastníci podgrafů hostovaných služeb aktualizovat své podgrafy na síť Graf. Vývojáři navíc mohli nárokovat automatickou aktualizaci podgrafů.
+During the upgrade process, owners of hosted service Subgraphs could upgrade their Subgraphs to The Graph Network. Additionally, developers were able to claim auto-upgraded Subgraphs.
### Měla tato aktualizace vliv na Podgraf Studio?
Ne, na Podgraf Studio neměl Sunrise vliv. Podgrafy byly okamžitě k dispozici pro dotazování, a to díky aktualizačnímu indexeru, který využívá stejnou infrastrukturu jako hostovaná služba.
-### Proč byly podgrafy zveřejněny na Arbitrum, začalo indexovat jinou síť?
+### Why were Subgraphs published to Arbitrum? Did The Graph start indexing a different network?
-The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that subgraphs are published to, but subgraphs can index any of the [supported networks](/supported-networks/)
+The Graph Network was initially deployed on Ethereum mainnet but was later moved to Arbitrum One in order to lower gas costs for all users. As a result, all new Subgraphs are published to The Graph Network on Arbitrum so that Indexers can support them. Arbitrum is the network that Subgraphs are published to, but Subgraphs can index any of the [supported networks](/supported-networks/).
## O Upgrade Indexer
> Aktualizace Indexer je v současné době aktivní.
-Upgrade Indexer byl implementován za účelem zlepšení zkušeností s upgradem podgrafů z hostované služby do sit' Graf a podpory nových verzí stávajících podgrafů, které dosud nebyly indexovány.
+The upgrade Indexer was implemented to improve the experience of upgrading Subgraphs from the hosted service to The Graph Network and support new versions of existing Subgraphs that had not yet been indexed.
### Co dělá upgrade Indexer?
-- Zavádí řetězce, které ještě nezískaly odměnu za indexaci v síti Graf, a zajišťuje, aby byl po zveřejnění podgrafu co nejrychleji k dispozici indexátor pro obsluhu dotazů.
+- It bootstraps chains that have yet to receive indexing rewards on The Graph Network and ensures that an Indexer is available to serve queries as quickly as possible after a Subgraph is published.
- It supports chains that were previously only available on the hosted service. Find a comprehensive list of supported chains [here](/supported-networks/).
-- Indexátoři, kteří provozují upgrade indexátoru, tak činí jako veřejnou službu pro podporu nových podgrafů a dalších řetězců, kterým chybí indexační odměny, než je Rada grafů schválí.
+- Indexers that operate an upgrade Indexer do so as a public service to support new Subgraphs and additional chains that lack indexing rewards before The Graph Council approves them.
### Proč Edge & Node spouští aktualizaci Indexer?
-Edge & Node historicky udržovaly hostovanou službu, a proto již mají synchronizovaná data pro podgrafy hostované služby.
+Edge & Node historically maintained the hosted service and, as a result, already have synced data for hosted service Subgraphs.
### Co znamená upgrade indexeru pro stávající indexery?
Řetězce, které byly dříve podporovány pouze v hostované službě, byly vývojářům zpřístupněny v síti Graf nejprve bez odměn za indexování.
-Tato akce však uvolnila poplatky za dotazy pro všechny zájemce o indexování a zvýšila počet podgrafů zveřejněných v síti Graf. V důsledku toho mají indexátoři více příležitostí indexovat a obsluhovat tyto podgrafy výměnou za poplatky za dotazy, a to ještě předtím, než jsou odměny za indexování pro řetězec povoleny.
+However, this action unlocked query fees for any interested Indexer and increased the number of Subgraphs published on The Graph Network. As a result, Indexers have more opportunities to index and serve these Subgraphs in exchange for query fees, even before indexing rewards are enabled for a chain.
-Upgrade Indexer také poskytuje komunitě Indexer informace o potenciální poptávce po podgrafech a nových řetězcích v síti grafů.
+The upgrade Indexer also provides the Indexer community with information about the potential demand for Subgraphs and new chains on The Graph Network.
### Co to znamená pro delegáti?
-The upgrade Indexer offers a powerful opportunity for Delegators. As it allowed more subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity.
+The upgrade Indexer offers a powerful opportunity for Delegators. Because it allowed more Subgraphs to be upgraded from the hosted service to The Graph Network, Delegators benefit from the increased network activity.
### Did the upgrade Indexer compete with existing Indexers for rewards?
-No, the upgrade Indexer only allocates the minimum amount per subgraph and does not collect indexing rewards.
+No, the upgrade Indexer only allocates the minimum amount per Subgraph and does not collect indexing rewards.
-It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and subgraphs.
+It operates on an “as needed” basis, serving as a fallback until sufficient service quality is achieved by at least three other Indexers in the network for respective chains and Subgraphs.
-### How does this affect subgraph developers?
+### How does this affect Subgraph developers?
-Subgraph developers can query their subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade.
+Subgraph developers can query their Subgraphs on The Graph Network almost immediately after upgrading from the hosted service or [publishing from Subgraph Studio](/subgraphs/developing/publishing/publishing-a-subgraph/), as no lead time was required for indexing. Please note that [creating a Subgraph](/developing/creating-a-subgraph/) was not impacted by this upgrade.
### How does the upgrade Indexer benefit data consumers?
@@ -71,10 +71,10 @@ Aktualizace Indexeru umožňuje podporu blockchainů v síti, které byly dřív
The upgrade Indexer prices queries at the market rate to avoid influencing the query fee market.
-### When will the upgrade Indexer stop supporting a subgraph?
+### When will the upgrade Indexer stop supporting a Subgraph?
-The upgrade Indexer supports a subgraph until at least 3 other Indexers successfully and consistently serve queries made to it.
+The upgrade Indexer supports a Subgraph until at least 3 other Indexers successfully and consistently serve queries made to it.
-Furthermore, the upgrade Indexer stops supporting a subgraph if it has not been queried in the last 30 days.
+Furthermore, the upgrade Indexer stops supporting a Subgraph if it has not been queried in the last 30 days.
-Other Indexers are incentivized to support subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it.
+Other Indexers are incentivized to support Subgraphs with ongoing query volume. The query volume to the upgrade Indexer should trend towards zero, as it has a small allocation size, and other Indexers should be chosen for queries ahead of it.
diff --git a/website/src/pages/cs/global.json b/website/src/pages/cs/global.json
index c431472eb4f5..59211940d133 100644
--- a/website/src/pages/cs/global.json
+++ b/website/src/pages/cs/global.json
@@ -6,6 +6,7 @@
"subgraphs": "Podgrafy",
"substreams": "Substreams",
"sps": "Substreams-Powered Subgraphs",
+ "tokenApi": "Token API",
"indexing": "Indexing",
"resources": "Resources",
"archived": "Archived"
@@ -24,9 +25,51 @@
"linkToThisSection": "Link to this section"
},
"content": {
- "note": "Note",
+ "callout": {
+ "note": "Note",
+ "tip": "Tip",
+ "important": "Important",
+ "warning": "Warning",
+ "caution": "Caution"
+ },
"video": "Video"
},
+ "openApi": {
+ "parameters": {
+ "pathParameters": "Path Parameters",
+ "queryParameters": "Query Parameters",
+ "headerParameters": "Header Parameters",
+ "cookieParameters": "Cookie Parameters",
+ "parameter": "Parameter",
+ "description": "Popis",
+ "value": "Value",
+ "required": "Required",
+ "deprecated": "Deprecated",
+ "defaultValue": "Default value",
+ "minimumValue": "Minimum value",
+ "maximumValue": "Maximum value",
+ "acceptedValues": "Accepted values",
+ "acceptedPattern": "Accepted pattern",
+ "format": "Format",
+ "serializationFormat": "Serialization format"
+ },
+ "request": {
+ "label": "Test this endpoint",
+ "noCredentialsRequired": "No credentials required",
+ "send": "Send Request"
+ },
+ "responses": {
+ "potentialResponses": "Potential Responses",
+ "status": "Status",
+ "description": "Popis",
+ "liveResponse": "Live Response",
+ "example": "Příklad"
+ },
+ "errors": {
+ "invalidApi": "Could not retrieve API {0}.",
+ "invalidOperation": "Could not retrieve operation {0} in API {1}."
+ }
+ },
"notFound": {
"title": "Oops! This page was lost in space...",
"subtitle": "Check if you’re using the right address or explore our website by clicking on the link below.",
diff --git a/website/src/pages/cs/index.json b/website/src/pages/cs/index.json
index 2cea19c4ff1a..545b2b717b56 100644
--- a/website/src/pages/cs/index.json
+++ b/website/src/pages/cs/index.json
@@ -7,7 +7,7 @@
"cta2": "Build your first subgraph"
},
"products": {
- "title": "The Graph’s Products",
+ "title": "The Graph's Products",
"description": "Choose a solution that fits your needs—interact with blockchain data your way.",
"subgraphs": {
"title": "Podgrafy",
@@ -21,7 +21,7 @@
},
"sps": {
"title": "Substreams-Powered Subgraphs",
- "description": "Boost your subgraph’s efficiency and scalability by using Substreams.",
+ "description": "Boost your subgraph's efficiency and scalability by using Substreams.",
"cta": "Set up a Substreams-powered subgraph"
},
"graphNode": {
@@ -37,10 +37,86 @@
},
"supportedNetworks": {
"title": "Podporované sítě",
+ "details": "Network Details",
+ "services": "Services",
+ "type": "Typ",
+ "protocol": "Protocol",
+ "identifier": "Identifier",
+ "chainId": "Chain ID",
+ "nativeCurrency": "Native Currency",
+ "docs": "Dokumenty",
+ "shortName": "Short Name",
+ "guides": "Guides",
+ "search": "Search networks",
+ "showTestnets": "Show Testnets",
+ "loading": "Loading...",
+ "infoTitle": "Info",
+ "infoText": "Boost your developer experience by enabling The Graph's indexing network.",
+ "infoLink": "Integrate new network",
"description": {
"base": "The Graph supports {0}. To add a new network, {1}",
"networks": "networks",
"completeThisForm": "complete this form"
+ },
+ "emptySearch": {
+ "title": "No networks found",
+ "description": "No networks match your search for \"{0}\"",
+ "clearSearch": "Clear search",
+ "showTestnets": "Show testnets"
+ },
+ "tableHeaders": {
+ "name": "Name",
+ "id": "ID",
+ "subgraphs": "Podgrafy",
+ "substreams": "Substreams",
+ "firehose": "Firehose",
+ "tokenapi": "Token API"
+ }
+ },
+ "networkGuides": {
+ "evm": {
+ "subgraphQuickStart": {
+ "title": "Subgraph quick start",
+ "description": "Kickstart your journey into subgraph development."
+ },
+ "substreams": {
+ "title": "Substreams",
+ "description": "Stream high-speed data for real-time indexing."
+ },
+ "timeseries": {
+ "title": "Timeseries & Aggregations",
+ "description": "Learn to track metrics like daily volumes or user growth."
+ },
+ "advancedFeatures": {
+ "title": "Advanced subgraph features",
+ "description": "Leverage features like custom data sources, event handlers, and topic filters."
+ },
+ "billing": {
+ "title": "Fakturace",
+ "description": "Optimize costs and manage billing efficiently."
+ }
+ },
+ "nonEvm": {
+ "officialDocs": {
+ "title": "Official Substreams docs",
+ "description": "Stream high-speed data for real-time indexing."
+ },
+ "spsIntro": {
+ "title": "Substreams-powered Subgraphs Intro",
+ "description": "Supercharge your subgraph's efficiency with Substreams."
+ },
+ "substreamsDev": {
+ "title": "Substreams.dev",
+ "description": "Access tutorials, templates, and documentation to build custom data modules."
+ },
+ "substreamsStarter": {
+ "title": "Substreams starter",
+ "description": "Leverage this boilerplate to create your first Substreams module."
+ },
+ "substreamsRepo": {
+ "title": "Substreams repo",
+ "description": "Study, contribute to, or customize the core Substreams framework."
+ }
}
},
"guides": {
@@ -80,15 +156,15 @@
"watchOnYouTube": "Watch on YouTube",
"theGraphExplained": {
"title": "The Graph Explained In 1 Minute",
- "description": "What is The Graph? How does it work? Why does it matter so much to web3 developers? Learn how and why The Graph is the backbone of web3 in this short, non-technical video."
+ "description": "Learn how and why The Graph is the backbone of web3 in this short, non-technical video."
},
"whatIsDelegating": {
"title": "What is Delegating?",
- "description": "Delegators are key participants who help secure The Graph by staking their GRT tokens to Indexers. This video explains key concepts to understand before delegating."
+ "description": "This video explains key concepts to understand before delegating, a form of staking that helps secure The Graph."
},
"howToIndexSolana": {
"title": "How to Index Solana with a Substreams-powered Subgraph",
- "description": "If you’re familiar with subgraphs, discover how Substreams offer a different approach for key use cases. This video walks you through the process of building your first Substreams-powered subgraph."
+ "description": "If you're familiar with Subgraphs, discover how Substreams offer a different approach for key use cases."
}
},
"time": {
diff --git a/website/src/pages/cs/indexing/chain-integration-overview.mdx b/website/src/pages/cs/indexing/chain-integration-overview.mdx
index e048421d7ad9..a2f1eed58864 100644
--- a/website/src/pages/cs/indexing/chain-integration-overview.mdx
+++ b/website/src/pages/cs/indexing/chain-integration-overview.mdx
@@ -36,7 +36,7 @@ Tento proces souvisí se službou Datová služba podgrafů a vztahuje se pouze
### 2. Co se stane, když podpora Firehose & Substreams přijde až poté, co bude síť podporována v mainnet?
-To by mělo vliv pouze na podporu protokolu pro indexování odměn na podgrafech s podsílou. Novou implementaci Firehose by bylo třeba testovat v testnetu podle metodiky popsané pro fázi 2 v tomto GIP. Podobně, za předpokladu, že implementace bude výkonná a spolehlivá, by bylo nutné provést PR na [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) (`Substreams data sources` Subgraph Feature) a také nový GIP pro podporu protokolu pro indexování odměn. PR a GIP může vytvořit kdokoli; nadace by pomohla se schválením Radou.
+This would only impact protocol support for indexing rewards on Substreams-powered Subgraphs. The new Firehose implementation would need testing on testnet, following the methodology outlined for Stage 2 in this GIP. Similarly, assuming the implementation is performant and reliable, a PR on the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) would be required (`Substreams data sources` Subgraph Feature), as well as a new GIP for protocol support for indexing rewards. Anyone can create the PR and GIP; the Foundation would help with Council approval.
### 3. How much time will the process of reaching full protocol support take?
diff --git a/website/src/pages/cs/indexing/new-chain-integration.mdx b/website/src/pages/cs/indexing/new-chain-integration.mdx
index 5eb78fc9efbd..2954c7f0b494 100644
--- a/website/src/pages/cs/indexing/new-chain-integration.mdx
+++ b/website/src/pages/cs/indexing/new-chain-integration.mdx
@@ -2,7 +2,7 @@
title: New Chain Integration
---
-Chains can bring subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies:
+Chains can bring Subgraph support to their ecosystem by starting a new `graph-node` integration. Subgraphs are a powerful indexing tool opening a world of possibilities for developers. Graph Node already indexes data from the chains listed here. If you are interested in a new integration, there are 2 integration strategies:
1. **EVM JSON-RPC**
2. **Firehose**: All Firehose integration solutions include Substreams, a large-scale streaming engine based off Firehose with native `graph-node` support, allowing for parallelized transforms.
@@ -25,7 +25,7 @@ For Graph Node to be able to ingest data from an EVM chain, the RPC node must ex
- `eth_getBlockByHash`
- `net_version`
- `eth_getTransactionReceipt`, in a JSON-RPC batch request
-- `trace_filter` *(limited tracing and optionally required for Graph Node)*
+- `trace_filter` _(limited tracing and optionally required for Graph Node)_
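The batch-request requirement above can be sanity-checked by hand. A minimal sketch, assuming a placeholder endpoint URL and invented transaction hashes; the batch payload is only constructed and printed, not sent:

```python
import json

# Placeholder endpoint -- substitute the chain's real JSON-RPC URL.
RPC_URL = "http://localhost:8545"

# Hypothetical transaction hashes, for illustration only.
tx_hashes = ["0x" + "ab" * 32, "0x" + "cd" * 32]

# Graph Node fetches receipts as a JSON-RPC *batch*: an array of request
# objects sent in a single POST, which the RPC node must accept.
batch = [
    {"jsonrpc": "2.0", "id": i, "method": "eth_getTransactionReceipt", "params": [h]}
    for i, h in enumerate(tx_hashes)
]
payload = json.dumps(batch)

# A compliant node answers with a JSON array, one receipt (or null) per request.
print(payload)
```

Posting `payload` to the node (for example with `curl` or `urllib.request`) should return a same-length JSON array; a node that rejects array-shaped request bodies cannot serve Graph Node's batched receipt fetches.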
### 2. Firehose Integration
@@ -47,15 +47,15 @@ For EVM chains, there exists a deeper level of data that can be achieved through
## EVM considerations - Difference between JSON-RPC & Firehose
-While the JSON-RPC and Firehose are both suitable for subgraphs, a Firehose is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces by 90% the number of RPC calls required for general indexing.
+While both JSON-RPC and Firehose are suitable for Subgraphs, a Firehose integration is always required for developers wanting to build with [Substreams](https://substreams.streamingfast.io). Supporting Substreams allows developers to build [Substreams-powered Subgraphs](/subgraphs/cookbook/substreams-powered-subgraphs/) for the new chain, and has the potential to improve the performance of your Subgraphs. Additionally, Firehose — as a drop-in replacement for the JSON-RPC extraction layer of `graph-node` — reduces the number of RPC calls required for general indexing by 90%.
-- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all subgraphs it processes.
+- All those `getLogs` calls and roundtrips get replaced by a single stream arriving into the heart of `graph-node`; a single block model for all Subgraphs it processes.
-> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index subgraphs. This is due to the Firehose's inability to provide smart contract state typically accessible by the `eth_call` RPC method. (It's worth reminding that `eth_calls` are not a good practice for developers)
+> NOTE: A Firehose-based integration for EVM chains will still require Indexers to run the chain's archive RPC node to properly index Subgraphs. This is because Firehose cannot provide the smart contract state that is typically accessible via the `eth_call` RPC method. (It's worth noting that `eth_call` is not a good practice for developers.)
## Config uzlu grafu
-Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set, you can test the integration by locally deploying a subgraph.
+Configuring Graph Node is as easy as preparing your local environment. Once your local environment is set up, you can test the integration by locally deploying a Subgraph.
1. [Clone Graph Node](https://github.com/graphprotocol/graph-node)
@@ -67,4 +67,4 @@ Configuring Graph Node is as easy as preparing your local environment. Once your
## Substreams-powered Subgraphs
-For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools enable the ability to enable [Substreams-powered subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself.
+For StreamingFast-led Firehose/Substreams integrations, basic support for foundational Substreams modules (e.g. decoded transactions, logs and smart-contract events) and Substreams codegen tools are included. These tools make it possible to enable [Substreams-powered Subgraphs](/substreams/sps/introduction/). Follow the [How-To Guide](https://substreams.streamingfast.io/documentation/how-to-guides/intro-your-first-application) and run `substreams codegen subgraph` to experience the codegen tools for yourself.
diff --git a/website/src/pages/cs/indexing/overview.mdx b/website/src/pages/cs/indexing/overview.mdx
index 52eda54899f1..47b88923efa8 100644
--- a/website/src/pages/cs/indexing/overview.mdx
+++ b/website/src/pages/cs/indexing/overview.mdx
@@ -7,7 +7,7 @@ Indexery jsou operátoři uzlů v síti Graf, kteří sázejí graf tokeny (GRT)
GRT, který je v protokolu založen, podléhá období rozmrazování a může být zkrácen, pokud jsou indexátory škodlivé a poskytují aplikacím nesprávná data nebo pokud indexují nesprávně. Indexátoři také získávají odměny za delegované sázky od delegátů, aby přispěli do sítě.
-Indexátory vybírají podgrafy k indexování na základě signálu kurátorů podgrafů, přičemž kurátoři sázejí na GRT, aby určili, které podgrafy jsou vysoce kvalitní a měly by být upřednostněny. Spotřebitelé (např. aplikace) mohou také nastavit parametry, podle kterých indexátoři zpracovávají dotazy pro jejich podgrafy, a nastavit preference pro stanovení ceny poplatků za dotazy.
+Indexers select Subgraphs to index based on the Subgraph’s curation signal, where Curators stake GRT in order to indicate which Subgraphs are high-quality and should be prioritized. Consumers (e.g. applications) can also set parameters for which Indexers process queries for their Subgraphs and set preferences for query fee pricing.
## FAQ
@@ -19,17 +19,17 @@ The minimum stake for an Indexer is currently set to 100K GRT.
**Query fee rebates** - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway contains a payment and the corresponding response a proof of query result validity.
-**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network.
+**Indexing rewards** - Generated via a 3% annual protocol wide inflation, the indexing rewards are distributed to Indexers who are indexing Subgraph deployments for the network.
### How are indexing rewards distributed?
-Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.**
+Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across Subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that Subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.**
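The two-step split described above can be illustrated with a hedged sketch; the issuance, signal, and allocation figures below are invented for the example:

```python
# Hypothetical rewards for one period, minted via the 3% annual issuance.
period_rewards = 1_000_000  # GRT (invented figure)

# Step 1: split across Subgraphs by each one's share of total curation signal.
signal = {"subgraph_a": 60_000, "subgraph_b": 40_000}  # GRT signalled (invented)
total_signal = sum(signal.values())
per_subgraph = {s: period_rewards * v // total_signal for s, v in signal.items()}
# subgraph_a carries 60% of the signal, so it receives 600_000 GRT.

# Step 2: split a Subgraph's share across Indexers by allocated stake.
# (Each allocation must still close with a valid POI to actually collect.)
allocations = {"indexer_1": 300_000, "indexer_2": 100_000}  # GRT allocated (invented)
total_allocated = sum(allocations.values())
per_indexer = {
    i: per_subgraph["subgraph_a"] * a // total_allocated
    for i, a in allocations.items()
}
print(per_indexer)  # {'indexer_1': 450000, 'indexer_2': 150000}
```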
Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack.
### What is a proof of indexing (POI)?
-POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block.
+POIs are used in the network to verify that an Indexer is indexing the Subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific Subgraph deployment up to and including that block.
### When are indexing rewards distributed?
@@ -41,7 +41,7 @@ The RewardsManager contract has a read-only [getRewards](https://github.com/grap
Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps:
-1. Query the [mainnet subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations:
+1. Query the [mainnet Subgraph](https://thegraph.com/explorer/subgraphs/9Co7EQe5PgW3ugCUJrJgRv4u9zdEuDJf8NvMWftNsBH8?view=Query&chain=arbitrum-one) to get the IDs for all active allocations:
```graphql
query indexerAllocations {
@@ -91,31 +91,31 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that
- **indexingRewardCut** - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%.
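As a quick worked example of the `indexingRewardCut` split (the reward amount is a hypothetical figure):

```python
# Hypothetical: an allocation closes having earned 10,000 GRT in indexing
# rewards, and the Indexer has set indexingRewardCut to 95%.
indexing_reward_cut_pct = 95
allocation_rewards = 10_000  # GRT (invented figure)

indexer_share = allocation_rewards * indexing_reward_cut_pct // 100
delegator_share = allocation_rewards - indexer_share  # split among Delegators
print(indexer_share, delegator_share)  # 9500 500
```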
-### How do Indexers know which subgraphs to index?
+### How do Indexers know which Subgraphs to index?
-Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network:
+Indexers may differentiate themselves by applying advanced techniques for making Subgraph indexing decisions, but to give a general idea, we'll discuss several key metrics used to evaluate Subgraphs in the network:
-- **Curation signal** - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query voluming is ramping up.
+- **Curation signal** - The proportion of network curation signal applied to a particular Subgraph is a good indicator of the interest in that Subgraph, especially during the bootstrap phase when query volume is ramping up.
-- **Query fees collected** - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand.
+- **Query fees collected** - The historical data for volume of query fees collected for a specific Subgraph is a good indicator of future demand.
-- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific subgraphs can allow an Indexer to monitor the supply side for subgraph queries to identify subgraphs that the network is showing confidence in or subgraphs that may show a need for more supply.
+- **Amount staked** - Monitoring the behavior of other Indexers or looking at proportions of total stake allocated towards specific Subgraphs can allow an Indexer to monitor the supply side for Subgraph queries to identify Subgraphs that the network is showing confidence in or Subgraphs that may show a need for more supply.
-- **Subgraphs with no indexing rewards** - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards.
+- **Subgraphs with no indexing rewards** - Some Subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a Subgraph if it is not generating indexing rewards.
### What are the hardware requirements?
-- **Small** - Enough to get started indexing several subgraphs, will likely need to be expanded.
+- **Small** - Enough to get started indexing several Subgraphs, will likely need to be expanded.
- **Standard** - Default setup, this is what is used in the example k8s/terraform deployment manifests.
-- **Medium** - Production Indexer supporting 100 subgraphs and 200-500 requests per second.
-- **Large** - Prepared to index all currently used subgraphs and serve requests for the related traffic.
+- **Medium** - Production Indexer supporting 100 Subgraphs and 200-500 requests per second.
+- **Large** - Prepared to index all currently used Subgraphs and serve requests for the related traffic.
-| Setup | Postgres (CPUs) | Postgres (memory in GBs) | Postgres (disk in TBs) | VMs (CPUs) | VMs (memory in GBs) |
-| --- | :-: | :-: | :-: | :-: | :-: |
-| Small | 4 | 8 | 1 | 4 | 16 |
-| Standard | 8 | 30 | 1 | 12 | 48 |
-| Medium | 16 | 64 | 2 | 32 | 64 |
-| Large | 72 | 468 | 3.5 | 48 | 184 |
+| Setup | Postgres (CPUs) | Postgres (memory in GBs) | Postgres (disk in TBs) | VMs (CPUs) | VMs (memory in GBs) |
+| -------- | :------------------: | :---------------------------: | :-------------------------: | :-------------: | :----------------------: |
+| Small | 4 | 8 | 1 | 4 | 16 |
+| Standard | 8 | 30 | 1 | 12 | 48 |
+| Medium | 16 | 64 | 2 | 32 | 64 |
+| Large | 72 | 468 | 3.5 | 48 | 184 |
### What are some basic security precautions an Indexer should take?
@@ -125,17 +125,17 @@ Indexers may differentiate themselves by applying advanced techniques for making
## Infrastructure
-At the center of an Indexer's infrastructure is the Graph Node which monitors the indexed networks, extracts and loads data per a subgraph definition and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.
+At the center of an Indexer's infrastructure is the Graph Node, which monitors the indexed networks, extracts and loads data per a Subgraph definition, and serves it as a [GraphQL API](/about/#how-the-graph-works). The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.
-- **PostgreSQL database** - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
+- **PostgreSQL database** - The main store for the Graph Node, this is where Subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.
-- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
+- **Data endpoint** - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain Subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.
-- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node, an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
+- **IPFS node (version less than 5)** - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node; an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
- **Indexer service** - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway.
-- **Indexer agent** - Facilitates the Indexers interactions onchain including registering on the network, managing subgraph deployments to its Graph Node/s, and managing allocations.
+- **Indexer agent** - Facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations.
- **Prometheus metrics server** - The Graph Node and Indexer components log their metrics to the metrics server.
@@ -147,20 +147,20 @@ Note: To support agile scaling, it is recommended that query and indexing concer
#### Graph Node
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
#### Indexer Service
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP server (for paid subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
-| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ---------------------------------------------------- | ----------------------------------------------------------- | --------------- | ---------------------- |
+| 7600 | GraphQL HTTP server (for paid Subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | \--port | `INDEXER_SERVICE_PORT` |
+| 7300 | Prometheus metrics | /metrics | \--metrics-port | - |
#### Indexer Agent
@@ -295,7 +295,7 @@ Deploy all resources with `kubectl apply -k $dir`.
### Graph Node
-[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint.
+[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use Subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint.
#### Getting started from source
@@ -365,9 +365,9 @@ docker-compose up
To successfully participate in the network requires almost constant monitoring and interaction, so we've built a suite of TypeScript applications to facilitate an Indexer's network participation. There are three Indexer components:
-- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each.
+- **Indexer agent** - The agent monitors the network and the Indexer's own infrastructure and manages which Subgraph deployments are indexed and allocated towards onchain and how much is allocated towards each.
-- **Indexer service** - The only component that needs to be exposed externally, the service passes on subgraph queries to the graph node, manages state channels for query payments, shares important decision making information to clients like the gateways.
+- **Indexer service** - The only component that needs to be exposed externally, the service passes Subgraph queries on to the Graph Node, manages state channels for query payments, and shares important decision-making information with clients like the gateways.
- **Indexer CLI** - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules.
@@ -525,7 +525,7 @@ graph indexer status
#### Indexer management using Indexer CLI
-The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on the behalf of the Indexer. The mechanism for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution.
+The suggested tool for interacting with the **Indexer Management API** is the **Indexer CLI**, an extension to the **Graph CLI**. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on behalf of the Indexer. The mechanisms for defining Indexer agent behavior are **allocation management** mode and **indexing rules**. Under auto mode, an Indexer can use **indexing rules** to apply their specific strategy for picking Subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent and known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using **actions queue** and explicitly approve them before they get executed. Under oversight mode, **indexing rules** are used to populate **actions queue** and also require explicit approval for execution.
#### Usage
@@ -537,7 +537,7 @@ The **Indexer CLI** connects to the Indexer agent, typically through port-forwar
- `graph indexer rules set [options] ...` - Set one or more indexing rules.
-- `graph indexer rules start [options] ` - Start indexing a subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed.
+- `graph indexer rules start [options] ` - Start indexing a Subgraph deployment if available and set its `decisionBasis` to `always`, so the Indexer agent will always choose to index it. If the global rule is set to always then all available Subgraphs on the network will be indexed.
- `graph indexer rules stop [options] ` - Stop indexing a deployment and set its `decisionBasis` to never, so it will skip this deployment when deciding on deployments to index.
@@ -561,9 +561,9 @@ All commands which display rules in the output can choose between the supported
#### Indexing rules
-Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing.
+Indexing rules can either be applied as global defaults or for specific Subgraph deployments using their IDs. The `deployment` and `decisionBasis` fields are mandatory, while all other fields are optional. When an indexing rule has `rules` as the `decisionBasis`, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the Subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing.
-For example, if the global rule has a `minStake` of **5** (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`.
+For example, if the global rule has a `minStake` of **5** (GRT), any Subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include `maxAllocationPercentage`, `minSignal`, `maxSignal`, `minStake`, and `minAverageQueryFees`.
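As a rough sketch (illustrative only, not the Indexer agent's actual implementation), the threshold evaluation described above can be modeled like this. The dictionary keys mirror the rule fields named in the text; the `should_index` helper is hypothetical:

```python
# Illustrative sketch of threshold-based rule evaluation; not the Indexer
# agent's real code. A deployment is chosen if ANY non-null "min*" threshold
# on the rule is met (the "max*" thresholds compare in the other direction).

def should_index(rule, network_values):
    thresholds = [
        ("minStake", network_values["stake"]),
        ("minSignal", network_values["signal"]),
        ("minAverageQueryFees", network_values["avgQueryFees"]),
    ]
    return any(
        rule.get(name) is not None and value >= rule[name]
        for name, value in thresholds
    )

# The example from the text: a global rule with a minStake of 5 GRT
global_rule = {"minStake": 5}
print(should_index(global_rule, {"stake": 6, "signal": 0, "avgQueryFees": 0}))  # True
print(should_index(global_rule, {"stake": 3, "signal": 0, "avgQueryFees": 0}))  # False
```

Null (absent) thresholds are simply skipped, which is why only `deployment` and `decisionBasis` are mandatory on a rule.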
Data model:
@@ -679,7 +679,7 @@ graph indexer actions execute approve
Note that supported action types for allocation management have different input requirements:
-- `Allocate` - allocate stake to a specific subgraph deployment
+- `Allocate` - allocate stake to a specific Subgraph deployment
- required action params:
- deploymentID
@@ -694,7 +694,7 @@ Note that supported action types for allocation management have different input
- poi
- force (forces using the provided POI even if it doesn’t match what the graph-node provides)
-- `Reallocate` - atomically close allocation and open a fresh allocation for the same subgraph deployment
+- `Reallocate` - atomically close allocation and open a fresh allocation for the same Subgraph deployment
- required action params:
- allocationID
@@ -706,7 +706,7 @@ Note that supported action types for allocation management have different input
#### Cost models
-Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers.
+Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each Subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers.
#### Agora
@@ -782,9 +782,9 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/indexi
6. Call `stake()` to stake GRT in the protocol.
-7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call `setOperator()` with the operator address.
+7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those performing day-to-day actions such as allocating on Subgraphs and serving (paid) queries. To set the operator, call `setOperator()` with the operator address.
-8. (Optional) In order to control the distribution of rewards and strategically attract Delegators Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so call `setDelegationParameters()`. The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, set the indexingRewardCutto distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and set `thecooldownBlocks` period to 500 blocks.
+8. (Optional) In order to control the distribution of rewards and strategically attract Delegators, Indexers can update their delegation parameters by updating their `indexingRewardCut` (parts per million), `queryFeeCut` (parts per million), and `cooldownBlocks` (number of blocks). To do so, call `setDelegationParameters()`. The following example sets the `queryFeeCut` to distribute 95% of query rebates to the Indexer and 5% to Delegators, sets the `indexingRewardCut` to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and sets the `cooldownBlocks` period to 500 blocks.
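For reference, these cut values are expressed in parts per million (PPM), so the splits they produce can be checked with a quick calculation (illustrative Python, not protocol code):

```python
PPM = 1_000_000  # delegation parameter cuts are expressed in parts per million

def split(amount, cut_ppm):
    """Split an amount between the Indexer (cut_ppm) and its Delegators (remainder)."""
    indexer_share = amount * cut_ppm // PPM
    return indexer_share, amount - indexer_share

# queryFeeCut = 950000 PPM: 95% of query rebates to the Indexer, 5% to Delegators
print(split(100, 950_000))  # (95, 5)
# indexingRewardCut = 600000 PPM: 60% of rewards to the Indexer, 40% to Delegators
print(split(100, 600_000))  # (60, 40)
```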
```
setDelegationParameters(950000, 600000, 500)
@@ -810,8 +810,8 @@ To set the delegation parameters using Graph Explorer interface, follow these st
After being created by an Indexer a healthy allocation goes through two states.
-- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules.
+- **Active** - Once an allocation is created onchain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a Subgraph deployment, which allows them to claim indexing rewards and serve queries for that Subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules.
- **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/indexing/overview/#how-are-indexing-rewards-distributed)).
-Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or have some chances of failing undeterministically.
+It is recommended that Indexers utilize the offchain syncing functionality to sync Subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for Subgraphs that may take longer than 28 epochs to sync or that have some chance of failing nondeterministically.
diff --git a/website/src/pages/cs/indexing/supported-network-requirements.mdx b/website/src/pages/cs/indexing/supported-network-requirements.mdx
index a81118cec231..e3d76e7c7767 100644
--- a/website/src/pages/cs/indexing/supported-network-requirements.mdx
+++ b/website/src/pages/cs/indexing/supported-network-requirements.mdx
@@ -2,17 +2,17 @@
title: Supported Network Requirements
---
-| Síť | Guides | System Requirements | Indexing Rewards |
-| --- | --- | --- | :-: |
-| Arbitrum | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/baremetal) [Docker Guide](https://docs.infradao.com/archive-nodes-101/arbitrum/docker) | 4+ core CPU Ubuntu 22.04 16GB+ RAM >= 8 TiB NVMe SSD _last updated August 2023_ | ✅ |
-| Avalanche | [Docker Guide](https://docs.infradao.com/archive-nodes-101/avalanche/docker) | 4 core / 8 threads CPU Ubuntu 22.04 16GB+ RAM >= 5 TiB NVMe SSD _last updated August 2023_ | ✅ |
-| Base | [Erigon Baremetal Guide](https://docs.infradao.com/archive-nodes-101/base/erigon/baremetal) [GETH Baremetal Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/baremetal) [GETH Docker Guide](https://docs.infradao.com/archive-nodes-101/optimism/geth/docker) | 4 core / 8 threads CPU Ubuntu 22.04 16GB+ RAM >= 8 TiB NVMe SSD _last updated August 2023_ | ✅ |
+| Polygon | [Docker Guide](https://docs.infradao.com/archive-nodes-101/polygon/docker) | 16 core CPU Ubuntu 22.04 32GB+ RAM >= 10 TiB NVMe SSD _last updated August 2023_ | ✅ |
+| Scroll | [Baremetal Guide](https://docs.infradao.com/archive-nodes-101/scroll/baremetal) [Docker Guide](https://docs.infradao.com/archive-nodes-101/scroll/docker) | 4 core / 8 threads CPU Debian 12 16GB+ RAM >= 1 TiB NVMe SSD _last updated 3rd April 2024_ | ✅ |
diff --git a/website/src/pages/cs/indexing/tap.mdx b/website/src/pages/cs/indexing/tap.mdx
index f8d028634016..6063720aca9d 100644
--- a/website/src/pages/cs/indexing/tap.mdx
+++ b/website/src/pages/cs/indexing/tap.mdx
@@ -1,21 +1,21 @@
---
-title: TAP Migration Guide
+title: GraphTally Guide
---
-Learn about The Graph’s new payment system, **Timeline Aggregation Protocol, TAP**. This system provides fast, efficient microtransactions with minimized trust.
+Learn about The Graph’s new payment system, **GraphTally** [(previously Timeline Aggregation Protocol)](https://docs.rs/tap_core/latest/tap_core/index.html). This system provides fast, efficient microtransactions with minimized trust.
## Overview
-[TAP](https://docs.rs/tap_core/latest/tap_core/index.html) is a drop-in replacement to the Scalar payment system currently in place. It provides the following key features:
+GraphTally is a drop-in replacement for the Scalar payment system currently in place. It provides the following key features:
- Efficiently handles micropayments.
- Adds a layer of consolidations to onchain transactions and costs.
- Allows Indexers control of receipts and payments, guaranteeing payment for queries.
- Enables decentralized, trustless gateways and improves `indexer-service` performance for multiple senders.
-## Specifics
+### Specifics
-TAP allows a sender to make multiple payments to a receiver, **TAP Receipts**, which aggregates these payments into a single payment, a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process.
+GraphTally allows a sender to make multiple payments, called **Receipts**, to a receiver, who aggregates them into a single payment: a **Receipt Aggregate Voucher**, also known as a **RAV**. This aggregated payment can then be verified on the blockchain, reducing the number of transactions and simplifying the payment process.
For each query, the gateway will send you a `signed receipt` that is stored in your database. These receipts are then aggregated by the `tap-agent` through a request, after which you'll receive a RAV. You can update a RAV by sending it with newer receipts; this will generate a new RAV with an increased value.
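Conceptually, the aggregation step can be sketched as follows. This is a simplified, hypothetical model: real receipts and RAVs are signed messages, and only the value-accumulation idea is shown here.

```python
from dataclasses import dataclass

# Simplified, hypothetical model of receipt aggregation into a RAV.

@dataclass
class Receipt:
    value: int      # payment for a single query
    timestamp: int  # ordering of receipts

@dataclass
class RAV:
    value_aggregate: int  # running total of all aggregated receipts
    last_timestamp: int   # newest receipt folded into this RAV

def aggregate(receipts, previous=None):
    """Fold receipts newer than the previous RAV into a new, larger RAV."""
    base_value = previous.value_aggregate if previous else 0
    base_ts = previous.last_timestamp if previous else 0
    new = [r for r in receipts if r.timestamp > base_ts]
    value = base_value + sum(r.value for r in new)
    ts = max((r.timestamp for r in new), default=base_ts)
    return RAV(value, ts)

rav1 = aggregate([Receipt(10, 1), Receipt(5, 2)])
print(rav1.value_aggregate)  # 15
# Sending the RAV along with newer receipts yields a RAV with an increased value
rav2 = aggregate([Receipt(10, 1), Receipt(5, 2), Receipt(7, 3)], previous=rav1)
print(rav2.value_aggregate)  # 22
```

This is why only one onchain transaction is needed per RAV redemption, no matter how many individual query receipts it covers.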
@@ -59,14 +59,14 @@ As long as you run `tap-agent` and `indexer-agent`, everything will be executed
| Signers | `0xfF4B7A5EfD00Ff2EC3518D4F250A27e4c29A2211` | `0xFb142dE83E261e43a81e9ACEADd1c66A0DB121FE` |
| Aggregator | `https://tap-aggregator.network.thegraph.com` | `https://tap-aggregator.testnet.thegraph.com` |
-### Požadavky
+### Prerequisites
-In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query TAP updates. You can use The Graph Network to query or host yourself on your `graph-node`.
+In addition to the typical requirements to run an indexer, you’ll need a `tap-escrow-subgraph` endpoint to query updates. You can query it via The Graph Network or host it yourself on your `graph-node`.
-- [Graph TAP Arbitrum Sepolia subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD)
-- [Graph TAP Arbitrum One subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1)
+- [Graph TAP Arbitrum Sepolia Subgraph (for The Graph testnet)](https://thegraph.com/explorer/subgraphs/7ubx365MiqBH5iUz6XWXWT8PTof5BVAyEzdb8m17RvbD)
+- [Graph TAP Arbitrum One Subgraph (for The Graph mainnet)](https://thegraph.com/explorer/subgraphs/4sukbNVTzGELnhdnpyPqsf1QqtzNHEYKKmJkgaT8z6M1)
-> Note: `indexer-agent` does not currently handle the indexing of this subgraph like it does for the network subgraph deployment. As a result, you have to index it manually.
+> Note: `indexer-agent` does not currently handle the indexing of this Subgraph like it does for the network Subgraph deployment. As a result, you have to index it manually.
## Migration Guide
@@ -79,7 +79,7 @@ The required software version can be found [here](https://github.com/graphprotoc
1. **Indexer Agent**
- Follow the [same process](https://github.com/graphprotocol/indexer/pkgs/container/indexer-agent#graph-protocol-indexer-components).
- - Give the new argument `--tap-subgraph-endpoint` to activate the new TAP codepaths and enable redeeming of TAP RAVs.
+ - Give the new argument `--tap-subgraph-endpoint` to activate the new GraphTally codepaths and enable redeeming of RAVs.
2. **Indexer Service**
@@ -128,18 +128,18 @@ query_url = ""
status_url = ""
[subgraphs.network]
-# Query URL for the Graph Network subgraph.
+# Query URL for the Graph Network Subgraph.
query_url = ""
# Optional, deployment to look for in the local `graph-node`, if locally indexed.
-# Locally indexing the subgraph is recommended.
+# Locally indexing the Subgraph is recommended.
# NOTE: Use `query_url` or `deployment_id` only
deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
[subgraphs.escrow]
-# Query URL for the Escrow subgraph.
+# Query URL for the Escrow Subgraph.
query_url = ""
# Optional, deployment to look for in the local `graph-node`, if locally indexed.
-# Locally indexing the subgraph is recommended.
+# Locally indexing the Subgraph is recommended.
# NOTE: Use `query_url` or `deployment_id` only
deployment_id = "Qmaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
diff --git a/website/src/pages/cs/indexing/tooling/graph-node.mdx b/website/src/pages/cs/indexing/tooling/graph-node.mdx
index 88ddb88813fb..3b71056d71f9 100644
--- a/website/src/pages/cs/indexing/tooling/graph-node.mdx
+++ b/website/src/pages/cs/indexing/tooling/graph-node.mdx
@@ -2,31 +2,31 @@
title: Graph Node
---
-Graf Uzel je komponenta, která indexuje podgrafy a zpřístupňuje výsledná data k dotazování prostřednictvím rozhraní GraphQL API. Jako taková je ústředním prvkem zásobníku indexeru a její správná činnost je pro úspěšný provoz indexeru klíčová.
+Graph Node is the component which indexes Subgraphs, and makes the resulting data available to query via a GraphQL API. As such it is central to the indexer stack, and correct operation of Graph Node is crucial to running a successful indexer.
This provides a contextual overview of Graph Node, and some of the more advanced options available to indexers. Detailed documentation and instructions can be found in the [Graph Node repository](https://github.com/graphprotocol/graph-node).
## Graph Node
-[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing subgraphs and making indexed data available to query.
+[Graph Node](https://github.com/graphprotocol/graph-node) is the reference implementation for indexing Subgraphs on The Graph Network, connecting to blockchain clients, indexing Subgraphs and making indexed data available to query.
Graph Node (and the whole indexer stack) can be run on bare metal, or in a cloud environment. This flexibility of the central indexing component is crucial to the robustness of The Graph Protocol. Similarly, Graph Node can be [built from source](https://github.com/graphprotocol/graph-node), or indexers can use one of the [provided Docker Images](https://hub.docker.com/r/graphprotocol/graph-node).
### PostgreSQL database
-Hlavní úložiště pro uzel Graf Uzel, kde jsou uložena data podgrafů, metadata o podgraf a síťová data týkající se podgrafů, jako je bloková cache a cache eth_call.
+The main store for the Graph Node, this is where Subgraph data is stored, as well as metadata about Subgraphs, and Subgraph-agnostic network data such as the block cache, and eth_call cache.
### Network clients
In order to index a network, Graph Node needs access to a network client via an EVM-compatible JSON-RPC API. This RPC may connect to a single client or it could be a more complex setup that load balances across multiple clients.
-While some subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)).
+While some Subgraphs may just require a full node, some may have indexing features which require additional RPC functionality. Specifically, Subgraphs which make `eth_calls` as part of indexing will require an archive node which supports [EIP-1898](https://eips.ethereum.org/EIPS/eip-1898), and Subgraphs with `callHandlers`, or `blockHandlers` with a `call` filter, require `trace_filter` support ([see trace module documentation here](https://openethereum.github.io/JSONRPC-trace-module)).
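For illustration, an EIP-1898 `eth_call` pins the call to a specific block by hash rather than by number, which is what makes deterministic indexing against an archive node possible. The request shape looks roughly like this; the address, block hash, and calldata below are placeholders, not real values:

```python
import json

# Shape of an EIP-1898 eth_call JSON-RPC request. All hex values here are
# placeholders for illustration only.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_call",
    "params": [
        {"to": "0x" + "ab" * 20, "data": "0x18160ddd"},  # e.g. a totalSupply() call
        # EIP-1898 block specifier: by hash, rejecting non-canonical (uncled) blocks
        {"blockHash": "0x" + "cd" * 32, "requireCanonical": True},
    ],
}
print(payload["params"][1]["requireCanonical"])  # True
```

A node without EIP-1898 support only accepts a block number (or a tag like `latest`) in that second parameter, which cannot distinguish between a canonical block and an uncle at the same height.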
**Network Firehoses** - a Firehose is a gRPC service providing an ordered, yet fork-aware, stream of blocks, developed by The Graph's core developers to better support performant indexing at scale. This is not currently an Indexer requirement, but Indexers are encouraged to familiarise themselves with the technology, ahead of full network support. Learn more about the Firehose [here](https://firehose.streamingfast.io/).
### IPFS Nodes
-Metadata nasazení podgrafů jsou uložena v síti IPFS. Uzel Graf přistupuje během nasazení podgrafu především k uzlu IPFS, aby načetl manifest podgrafu a všechny propojené soubory. Síťové indexery nemusí hostit vlastní uzel IPFS. Uzel IPFS pro síť je hostován na adrese https://ipfs.network.thegraph.com.
+Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during Subgraph deployment to fetch the Subgraph manifest and all linked files. Network indexers do not need to host their own IPFS node. An IPFS node for the network is hosted at https://ipfs.network.thegraph.com.
### Prometheus metrics server
@@ -77,19 +77,19 @@ A complete Kubernetes example configuration can be found in the [indexer reposit
When it is running, Graph Node exposes the following ports:
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
-| 8001 | GraphQL WS (for subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
-| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ----------------------------------------------- | ---------------------------------------------- | ------------------ | -------------------- |
+| 8000 | GraphQL HTTP server (for Subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | \--http-port | - |
+| 8001 | GraphQL WS (for Subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | \--ws-port | - |
+| 8020 | JSON-RPC (for managing deployments) | / | \--admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | \--index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | \--metrics-port | - |
> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the Graph Node JSON-RPC endpoint.
## Advanced Graph Node configuration
-V nejjednodušším případě lze Graf Uzel provozovat s jednou instancí Graf Uzel, jednou databází PostgreSQL, uzlem IPFS a síťovými klienty podle potřeby indexovaných podgrafů.
+At its simplest, Graph Node can be operated with a single instance of Graph Node, a single PostgreSQL database, an IPFS node, and the network clients as required by the Subgraphs to be indexed.
This setup can be scaled horizontally, by adding multiple Graph Nodes, and multiple databases to support those Graph Nodes. Advanced users may want to take advantage of some of the horizontal scaling capabilities of Graph Node, as well as some of the more advanced configuration options, via the `config.toml` file and Graph Node's environment variables.
@@ -114,13 +114,13 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https:
#### Multiple Graph Nodes
-Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules).
+Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting Subgraphs across nodes with [deployment rules](#deployment-rules).
> Note that multiple Graph Nodes can all be configured to use the same database, which itself can be horizontally scaled via sharding.
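To make this split concrete, a node can be declared query-only via `config.toml`. A minimal sketch, assuming the illustrative node ID pattern below:

```toml
[general]
# Nodes whose node_id matches this regular expression will only serve
# GraphQL queries and will not be assigned any indexing work.
query = "query_node_.*"
```

Any node started with e.g. `--node-id query_node_0` would then act as a dedicated query node.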
#### Deployment rules
-Given multiple Graph Nodes, it is necessary to manage deployment of new subgraphs so that the same subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the subgraph name and the network that the deployment is indexing in order to make a decision.
+Given multiple Graph Nodes, it is necessary to manage deployment of new Subgraphs so that the same Subgraph isn't being indexed by two different nodes, which would lead to collisions. This can be done by using deployment rules, which can also specify which `shard` a Subgraph's data should be stored in, if database sharding is being used. Deployment rules can match on the Subgraph name and the network that the deployment is indexing in order to make a decision.
An example deployment rule configuration:
@@ -138,7 +138,7 @@ indexers = [ "index_node_kovan_0" ]
match = { network = [ "xdai", "poa-core" ] }
indexers = [ "index_node_other_0" ]
[[deployment.rule]]
-# There's no 'match', so any subgraph matches
+# There's no 'match', so any Subgraph matches
shards = [ "sharda", "shardb" ]
indexers = [
"index_node_community_0",
@@ -167,11 +167,11 @@ Každý uzel, jehož --node-id odpovídá regulárnímu výrazu, bude nastaven t
For most use cases, a single Postgres database is sufficient to support a Graph Node instance. When a Graph Node instance outgrows a single Postgres database, it is possible to split the storage of Graph Node's data across multiple Postgres databases. All databases together form the store of the Graph Node instance. Each individual database is called a shard.
-Shards can be used to split subgraph deployments across multiple databases, and can also be used to use replicas to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more subgraphs are being indexed.
+Shards can be used to split Subgraph deployments across multiple databases, and replicas can be used to spread query load across databases. This includes configuring the number of available database connections each `graph-node` should keep in its connection pool for each database, which becomes increasingly important as more Subgraphs are being indexed.
Sharding becomes useful when your existing database can't keep up with the load that Graph Node puts on it, and when it isn't possible to increase the database size any more.
-> Obecně je lepší vytvořit jednu co největší databázi, než začít s oddíly. Jednou z výjimek jsou případy, kdy je provoz dotazů rozdělen velmi nerovnoměrně mezi dílčí podgrafy; v těchto situacích může výrazně pomoci, pokud jsou dílčí podgrafy s velkým objemem uchovávány v jednom shardu a vše ostatní v jiném, protože toto nastavení zvyšuje pravděpodobnost, že data pro dílčí podgrafu s velkým objemem zůstanou v interní cache db a nebudou nahrazena daty, která nejsou tolik potřebná z dílčích podgrafů s malým objemem.
+> It is generally better to make a single database as big as possible before starting with shards. One exception is where query traffic is split very unevenly between Subgraphs; in those situations it can help dramatically if the high-volume Subgraphs are kept in one shard and everything else in another, because that setup makes it more likely that the data for the high-volume Subgraphs stays in the db-internal cache and doesn't get replaced by less-needed data from low-volume Subgraphs.
In terms of configuring connections, start with max_connections in postgresql.conf set to 400 (or maybe even 200) and look at the store_connection_wait_time_ms and store_connection_checkout_count Prometheus metrics. Noticeable wait times (anything above 5ms) are an indication that there are too few connections available; high wait times there can also be caused by the database being very busy (for example, high CPU load). However, if the database seems otherwise stable, high wait times indicate a need to increase the number of connections. In the configuration, how many connections each Graph Node instance can use is an upper limit, and Graph Node will not keep connections open if it doesn't need them.
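As a sketch, the per-shard connection pool is sized in `config.toml`; the connection string and pool size below are illustrative:

```toml
[store]
[store.primary]
# Connection string for this shard's Postgres database (illustrative)
connection = "postgresql://graph:password@primary-db:5432/graph"
# Upper limit on connections each graph-node instance may use for this shard
pool_size = 10
```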
@@ -188,7 +188,7 @@ ingestor = "block_ingestor_node"
#### Supporting multiple networks
-The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of:
+The Graph Protocol is increasing the number of networks supported for indexing rewards, and there exist many Subgraphs indexing unsupported networks which an indexer would like to process. The `config.toml` file allows for expressive and flexible configuration of:
- Multiple networks
- Multiple providers per network (this can allow splitting of load across providers, and can also allow for configuration of full nodes as well as archive nodes, with Graph Node preferring cheaper providers if a given workload allows).
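These options come together in the `[chains]` section of `config.toml`. A hedged sketch, with illustrative node names and provider URLs:

```toml
[chains]
# Node responsible for block ingestion (matches a node_id)
ingestor = "block_ingestor_node"

[chains.mainnet]
# Shard in which this chain's data is stored
shard = "primary"
# Multiple providers; Graph Node can choose based on the features required
provider = [
  { label = "mainnet-full", url = "http://full-node:8545", features = [] },
  { label = "mainnet-archive", url = "http://archive-node:8545", features = ["archive", "traces"] },
]
```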
@@ -225,11 +225,11 @@ Uživatelé, kteří provozují škálované nastavení indexování s pokročil
### Managing Graph Node
-Vzhledem k běžícímu uzlu Graf (nebo uzlům Graf Uzel!) je pak úkolem spravovat rozmístěné podgrafy v těchto uzlech. Graf Uzel nabízí řadu nástrojů, které pomáhají se správou podgrafů.
+Given a running Graph Node (or Graph Nodes!), the challenge is then to manage deployed Subgraphs across those nodes. Graph Node surfaces a range of tools to help with managing Subgraphs.
#### Logging
-Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace.
+Graph Node's logs can provide useful information for debugging and optimisation of Graph Node and specific Subgraphs. Graph Node supports different log levels via the `GRAPH_LOG` environment variable, with the following levels: error, warn, info, debug or trace.
In addition, setting `GRAPH_LOG_QUERY_TIMING` to `gql` provides more details about how GraphQL queries are running (though this will generate a large volume of logs).
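For example, in a Docker Compose setup these variables might be set as follows (the service name and image tag are illustrative):

```yaml
services:
  index-node-0:
    image: graphprotocol/graph-node:latest
    environment:
      GRAPH_LOG: debug             # one of: error, warn, info, debug, trace
      GRAPH_LOG_QUERY_TIMING: gql  # verbose GraphQL query timing logs
```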
@@ -247,11 +247,11 @@ The graphman command is included in the official containers, and you can docker
Full documentation of `graphman` commands is available in the Graph Node repository. See [/docs/graphman.md](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md) in the Graph Node `/docs`.
-### Práce s podgrafy
+### Working with Subgraphs
#### Indexing status API
-API pro stav indexování, které je ve výchozím nastavení dostupné na portu 8030/graphql, nabízí řadu metod pro kontrolu stavu indexování pro různé podgrafy, kontrolu důkazů indexování, kontrolu vlastností podgrafů a další.
+Available on port 8030/graphql by default, the indexing status API exposes a range of methods for checking indexing status for different Subgraphs, checking proofs of indexing, inspecting Subgraph features and more.
The full schema is available [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql).
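As an illustration, a query against the status endpoint (field names taken from the linked schema) might look like:

```graphql
{
  indexingStatuses {
    subgraph
    synced
    health
    fatalError {
      message
    }
    chains {
      network
      chainHeadBlock {
        number
      }
      latestBlock {
        number
      }
    }
  }
}
```

Comparing `latestBlock` with `chainHeadBlock` is a quick way to see how far behind the chain head a given deployment is.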
@@ -263,7 +263,7 @@ Proces indexování má tři samostatné části:
- Processing events in order with the appropriate handlers (this can involve calling the chain for state, and fetching data from the store)
- Writing the resulting data to the store
-Tyto fáze jsou spojeny do potrubí (tj. mohou být prováděny paralelně), ale jsou na sobě závislé. Pokud se podgrafy indexují pomalu, bude příčina záviset na konkrétním podgrafu.
+These stages are pipelined (i.e. they can be executed in parallel), but they are dependent on one another. Where Subgraphs are slow to index, the underlying cause will depend on the specific Subgraph.
Common causes of indexing slowness:
@@ -276,24 +276,24 @@ Běžné příčiny pomalého indexování:
- The provider itself falling behind the chain head
- Slowness in fetching new receipts at the chain head from the provider
-Metriky indexování podgrafů mohou pomoci diagnostikovat hlavní příčinu pomalého indexování. V některých případech spočívá problém v samotném podgrafu, ale v jiných případech mohou zlepšení síťových poskytovatelů, snížení konfliktů v databázi a další zlepšení konfigurace výrazně zlepšit výkon indexování.
+Subgraph indexing metrics can help diagnose the root cause of indexing slowness. In some cases, the problem lies with the Subgraph itself, but in others, improved network providers, reduced database contention and other configuration improvements can markedly improve indexing performance.
-#### Neúspěšné podgrafy
+#### Failed Subgraphs
-Během indexování mohou dílčí graf selhat, pokud narazí na neočekávaná data, pokud některá komponenta nefunguje podle očekávání nebo pokud je chyba ve zpracovatelích událostí nebo v konfiguraci. Existují dva obecné typy selhání:
+During indexing, Subgraphs might fail if they encounter unexpected data, if some component is not working as expected, or if there is a bug in the event handlers or configuration. There are two general types of failure:
- Deterministic failures: these are failures which will not be resolved with retries
- Non-deterministic failures: these might be down to issues with the provider, or some unexpected Graph Node error. When a non-deterministic failure occurs, Graph Node will retry the failing handlers, backing off over time.
-V některých případech může být chyba řešitelná indexátorem (například pokud je chyba důsledkem toho, že není k dispozici správný typ zprostředkovatele, přidání požadovaného zprostředkovatele umožní pokračovat v indexování). V jiných případech je však nutná změna v kódu podgrafu.
+In some cases a failure might be resolvable by the indexer (for example if the error is a result of not having the right kind of provider, adding the required provider will allow indexing to continue). However in others, a change in the Subgraph code is required.
-> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository.
+> Deterministic failures are considered "final", with a Proof of Indexing generated for the failing block, while non-deterministic failures are not, as the Subgraph may manage to "unfail" and continue indexing. In some cases, the non-deterministic label is incorrect, and the Subgraph will never overcome the error; such failures should be reported as issues on the Graph Node repository.
#### Block and call cache
-Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered subgraph.
+Graph Node caches certain data in the store in order to save refetching from the provider. Blocks are cached, as are the results of `eth_calls` (the latter being cached as of a specific block). This caching can dramatically increase indexing speed during "resyncing" of a slightly altered Subgraph.
-However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected subgraphs, which will then fetch fresh data from the (hopefully) healthy provider.
+However, in some instances, if an Ethereum node has provided incorrect data for some period, that can make its way into the cache, leading to incorrect data or failed Subgraphs. In this case indexers can use `graphman` to clear the poisoned cache, and then rewind the affected Subgraphs, which will then fetch fresh data from the (hopefully) healthy provider.
If a block cache inconsistency is suspected, such as a tx receipt missing event:
@@ -304,7 +304,7 @@ Pokud existuje podezření na nekonzistenci blokové mezipaměti, například ch
#### Querying issues and errors
-Jakmile je podgraf indexován, lze očekávat, že indexery budou obsluhovat dotazy prostřednictvím koncového bodu vyhrazeného pro dotazy podgrafu. Pokud indexátor doufá, že bude obsluhovat značný objem dotazů, doporučuje se použít vyhrazený uzel pro dotazy a v případě velmi vysokého objemu dotazů mohou indexátory chtít nakonfigurovat oddíly replik tak, aby dotazy neovlivňovaly proces indexování.
+Once a Subgraph has been indexed, indexers can expect to serve queries via the Subgraph's dedicated query endpoint. If the indexer is hoping to serve significant query volume, a dedicated query node is recommended, and in case of very high query volumes, indexers may want to configure replica shards so that queries don't impact the indexing process.
However, even with a dedicated query node and replicas, certain queries can take a long time to execute, and in some cases increase memory usage and negatively impact the query time for other users.
@@ -316,7 +316,7 @@ Graph Node caches GraphQL queries by default, which can significantly reduce dat
##### Analysing queries
-Problematické dotazy se nejčastěji objevují jedním ze dvou způsobů. V některých případech uživatelé sami hlásí, že daný dotaz je pomalý. V takovém případě je úkolem diagnostikovat příčinu pomalosti - zda se jedná o obecný problém, nebo o specifický problém daného podgrafu či dotazu. A pak ho samozřejmě vyřešit, pokud je to možné.
+Problematic queries most often surface in one of two ways. In some cases, users themselves report that a given query is slow. In that case the challenge is to diagnose the reason for the slowness - whether it is a general issue, or specific to that Subgraph or query. And then of course to resolve it, if possible.
In other cases, the trigger might be high memory usage on a query node, in which case the challenge is first to identify the query causing the issue.
@@ -336,10 +336,10 @@ In general, tables where the number of distinct entities are less than 1% of the
Once a table has been determined to be account-like, running `graphman stats account-like <sgdNNN>.<table>` will turn on the account-like optimization for queries against that table. The optimization can be turned off again with `graphman stats account-like --clear <sgdNNN>.<table>`. It takes up to 5 minutes for query nodes to notice that the optimization has been turned on or off. After turning the optimization on, it is necessary to verify that the change does not in fact make queries slower for that table. If you have configured Grafana to monitor Postgres, slow queries would show up in `pg_stat_activity` in large numbers, taking several seconds. In that case, the optimization needs to be turned off again.
-For Uniswap-like subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.
+For Uniswap-like Subgraphs, the `pair` and `token` tables are prime candidates for this optimization, and can have a dramatic effect on database load.
-#### Odstranění podgrafů
+#### Removing Subgraphs
> This is new functionality, which will be available in Graph Node 0.29.x
-At some point an indexer might want to remove a given subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all it's indexed data. The deployment can be specified as either a subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
+At some point an indexer might want to remove a given Subgraph. This can be easily done via `graphman drop`, which deletes a deployment and all its indexed data. The deployment can be specified as either a Subgraph name, an IPFS hash `Qm..`, or the database namespace `sgdNNN`. Further documentation is available [here](https://github.com/graphprotocol/graph-node/blob/master/docs/graphman.md#-drop).
diff --git a/website/src/pages/cs/indexing/tooling/graphcast.mdx b/website/src/pages/cs/indexing/tooling/graphcast.mdx
index aec7d84070c3..5aa86adcc8da 100644
--- a/website/src/pages/cs/indexing/tooling/graphcast.mdx
+++ b/website/src/pages/cs/indexing/tooling/graphcast.mdx
@@ -10,10 +10,10 @@ V současné době jsou náklady na vysílání informací ostatním účastník
The Graphcast SDK (Software Development Kit) allows developers to build Radios, which are gossip-powered applications that Indexers can run to serve a given purpose. We also intend to create a few Radios (or provide support to other developers/teams that wish to build Radios) for the following use cases:
-- Křížová kontrola integrity dat subgrafu v reálném čase ([Podgraf Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)).
-- Provádění aukcí a koordinace pro warp synchronizaci podgrafů, substreamů a dat Firehose z jiných Indexerů.
-- Vlastní hlášení o analýze aktivních dotazů, včetně objemů požadavků na dílčí grafy, objemů poplatků atd.
-- Vlastní hlášení o analýze indexování, včetně času indexování podgrafů, nákladů na plyn obsluhy, zjištěných chyb indexování atd.
+- Real-time cross-checking of Subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)).
+- Conducting auctions and coordination for warp syncing Subgraphs, substreams, and Firehose data from other Indexers.
+- Self-reporting on active query analytics, including Subgraph request volumes, fee volumes, etc.
+- Self-reporting on indexing analytics, including Subgraph indexing time, handler gas costs, indexing errors encountered, etc.
- Self-reporting on stack information including graph-node version, Postgres version, Ethereum client version, etc.
### Learn More
diff --git a/website/src/pages/cs/resources/benefits.mdx b/website/src/pages/cs/resources/benefits.mdx
index e18158242265..c0c0031d3f7b 100644
--- a/website/src/pages/cs/resources/benefits.mdx
+++ b/website/src/pages/cs/resources/benefits.mdx
@@ -27,47 +27,47 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar
## Low Volume User (less than 100,000 queries per month)
-| Srovnání nákladů | Vlastní hostitel | The Graph Network |
-| :-: | :-: | :-: |
-| Měsíční náklady na server\* | $350 měsíčně | $0 |
-| Náklady na dotazování | $0+ | $0 per month |
-| Inženýrský čas† | $400 měsíčně | Žádné, zabudované do sítě s globálně distribuovanými indexery |
-| Dotazy za měsíc | Omezeno na infra schopnosti | 100,000 (Free Plan) |
-| Náklady na jeden dotaz | $0 | $0‡ |
-| Infrastructure | Centralizovaný | Decentralizované |
-| Geografická redundancy | $750+ Usd za další uzel | Zahrnuto |
-| Provozuschopnost | Různé | 99.9%+ |
-| Celkové měsíční náklady | $750+ | $0 |
+| Cost Comparison | Self Hosted | The Graph Network |
+| :-------------------------: | :----------------------------: | :-------------------------------------------------------------: |
+| Monthly server cost\* | $350 per month | $0 |
+| Query costs | $0+ | $0 per month |
+| Engineering time† | $400 per month | None, built into the network with globally distributed Indexers |
+| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) |
+| Cost per query | $0 | $0‡ |
+| Infrastructure | Centralized | Decentralized |
+| Geographic redundancy | $750+ per additional node | Included |
+| Uptime | Varies | 99.9%+ |
+| Total Monthly Cost | $750+ | $0 |
## Medium Volume User (~3M queries per month)
-| Srovnání nákladů | Vlastní hostitel | The Graph Network |
-| :-: | :-: | :-: |
-| Měsíční náklady na server\* | $350 měsíčně | $0 |
-| Náklady na dotazování | $500 měsíčně | $120 per month |
-| Inženýrský čas† | $800 měsíčně | Žádné, zabudované do sítě s globálně distribuovanými indexery |
-| Dotazy za měsíc | Omezeno na infra schopnosti | ~3,000,000 |
-| Náklady na jeden dotaz | $0 | $0.00004 |
-| Infrastructure | Centralizovaný | Decentralizované |
-| Výdaje inženýrskou | $200 za hodinu | Zahrnuto |
-| Geografická redundancy | $1,200 celkových nákladů na další uzel | Zahrnuto |
-| Provozuschopnost | Různé | 99.9%+ |
-| Celkové měsíční náklady | $1,650+ | $120 |
+| Cost Comparison | Self Hosted | The Graph Network |
+| :-------------------------: | :--------------------------------------: | :-------------------------------------------------------------: |
+| Monthly server cost\* | $350 per month | $0 |
+| Query costs | $500 per month | $120 per month |
+| Engineering time† | $800 per month | None, built into the network with globally distributed Indexers |
+| Queries per month | Limited to infra capabilities | ~3,000,000 |
+| Cost per query | $0 | $0.00004 |
+| Infrastructure | Centralized | Decentralized |
+| Engineering expense | $200 per hour | Included |
+| Geographic redundancy | $1,200 in total costs per additional node | Included |
+| Uptime | Varies | 99.9%+ |
+| Total Monthly Cost | $1,650+ | $120 |
## High Volume User (~30M queries per month)
-| Srovnání nákladů | Vlastní hostitel | The Graph Network |
-| :-: | :-: | :-: |
-| Měsíční náklady na server\* | $1100 měsíčně za uzel | $0 |
-| Náklady na dotazování | $4000 | $1,200 per month |
-| Počet potřebných uzlů | 10 | Nepoužije se |
-| Inženýrský čas† | 6$, 000 nebo více měsíčně | Žádné, zabudované do sítě s globálně distribuovanými indexery |
-| Dotazy za měsíc | Omezeno na infra schopnosti | ~30,000,000 |
-| Náklady na jeden dotaz | $0 | $0.00004 |
-| Infrastructure | Centralizovaný | Decentralizované |
-| Geografická redundancy | $1,200 celkových nákladů na další uzel | Zahrnuto |
-| Provozuschopnost | Různé | 99.9%+ |
-| Celkové měsíční náklady | $11,000+ | $1,200 |
+| Cost Comparison | Self Hosted | The Graph Network |
+| :-------------------------: | :--------------------------------------: | :-------------------------------------------------------------: |
+| Monthly server cost\* | $1,100 per month, per node | $0 |
+| Query costs | $4,000 | $1,200 per month |
+| Number of nodes needed | 10 | Not applicable |
+| Engineering time† | $6,000 or more per month | None, built into the network with globally distributed Indexers |
+| Queries per month | Limited to infra capabilities | ~30,000,000 |
+| Cost per query | $0 | $0.00004 |
+| Infrastructure | Centralized | Decentralized |
+| Geographic redundancy | $1,200 in total costs per additional node | Included |
+| Uptime | Varies | 99.9%+ |
+| Total Monthly Cost | $11,000+ | $1,200 |
\*including backup costs: $50-$100 per month
@@ -75,9 +75,9 @@ Query costs may vary; the quoted cost is the average at time of publication (Mar
‡Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries.
-Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet.
+Estimated costs are only for Ethereum Mainnet Subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their Subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/archived/arbitrum/arbitrum-faq/) are substantially lower than Ethereum mainnet.
-Kurátorování signálu na podgrafu je volitelný jednorázový čistý nulový náklad (např. na podgrafu lze kurátorovat signál v hodnotě $1k a později jej stáhnout - s potenciálem získat v tomto procesu výnosy).
+Curating signal on a Subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a Subgraph, and later withdrawn—with potential to earn returns in the process).
## No Setup Costs & Greater Operational Efficiency
@@ -89,4 +89,4 @@ The Graph’s decentralized network gives users access to geographic redundancy
Bottom line: The Graph Network is less expensive, easier to use, and produces superior results compared to running a `graph-node` locally.
-Start using The Graph Network today, and learn how to [publish your subgraph to The Graph's decentralized network](/subgraphs/quick-start/).
+Start using The Graph Network today, and learn how to [publish your Subgraph to The Graph's decentralized network](/subgraphs/quick-start/).
diff --git a/website/src/pages/cs/resources/glossary.mdx b/website/src/pages/cs/resources/glossary.mdx
index 70161f581585..49fd1f60c539 100644
--- a/website/src/pages/cs/resources/glossary.mdx
+++ b/website/src/pages/cs/resources/glossary.mdx
@@ -4,51 +4,51 @@ title: Glosář
- **The Graph**: A decentralized protocol for indexing and querying data.
-- **Query**: A request for data. In the case of The Graph, a query is a request for data from a subgraph that will be answered by an Indexer.
+- **Query**: A request for data. In the case of The Graph, a query is a request for data from a Subgraph that will be answered by an Indexer.
-- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+- **GraphQL**: A query language for APIs and a runtime for fulfilling those queries with your existing data. The Graph uses GraphQL to query Subgraphs.
-- **Endpoint**: A URL that can be used to query a subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query subgraphs on The Graph's decentralized network.
+- **Endpoint**: A URL that can be used to query a Subgraph. The testing endpoint for Subgraph Studio is `https://api.studio.thegraph.com/query///` and the Graph Explorer endpoint is `https://gateway.thegraph.com/api//subgraphs/id/`. The Graph Explorer endpoint is used to query Subgraphs on The Graph's decentralized network.
-- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish subgraphs to The Graph Network. Once it is indexed, the subgraph can be queried by anyone.
+- **Subgraph**: An open API that extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL. Developers can build, deploy, and publish Subgraphs to The Graph Network. Once it is indexed, the Subgraph can be queried by anyone.
- **Indexer**: Network participants that run indexing nodes to index data from blockchains and serve GraphQL queries.
- **Indexer Revenue Streams**: Indexers are rewarded in GRT with two components: query fee rebates and indexing rewards.
- 1. **Query Fee Rebates**: Payments from subgraph consumers for serving queries on the network.
+ 1. **Query Fee Rebates**: Payments from Subgraph consumers for serving queries on the network.
- 2. **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually.
+ 2. **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are generated via new issuance of 3% GRT annually.
- **Indexer's Self-Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit.
- **Delegation Capacity**: The maximum amount of GRT an Indexer can accept from Delegators. Indexers can only accept up to 16x their Indexer Self-Stake, and additional delegation results in diluted rewards. For example, if an Indexer has a Self-Stake of 1M GRT, their delegation capacity is 16M. However, Indexers can increase their Delegation Capacity by increasing their Self-Stake.
-- **Upgrade Indexer**: An Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers.
+- **Upgrade Indexer**: An Indexer designed to act as a fallback for Subgraph queries not serviced by other Indexers on the network. The upgrade Indexer is not competitive with other Indexers.
-- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs.
+- **Delegator**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in Subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing Subgraphs.
- **Delegation Tax**: A 0.5% fee paid by Delegators when they delegate GRT to Indexers. The GRT used to pay the fee is burned.
-- **Curator**: Network participants that identify high-quality subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a subgraph, 10% is distributed to the Curators of that subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a subgraph.
+- **Curator**: Network participants that identify high-quality Subgraphs, and signal GRT on them in exchange for curation shares. When Indexers claim query fees on a Subgraph, 10% is distributed to the Curators of that Subgraph. There is a positive correlation between the amount of GRT signaled and the number of Indexers indexing a Subgraph.
-- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on subgraphs. The GRT used to pay the fee is burned.
+- **Curation Tax**: A 1% fee paid by Curators when they signal GRT on Subgraphs. The GRT used to pay the fee is burned.
-- **Data Consumer**: Any application or user that queries a subgraph.
+- **Data Consumer**: Any application or user that queries a Subgraph.
-- **Subgraph Developer**: A developer who builds and deploys a subgraph to The Graph's decentralized network.
+- **Subgraph Developer**: A developer who builds and deploys a Subgraph to The Graph's decentralized network.
-- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example.
+- **Subgraph Manifest**: A YAML file that describes the Subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example.
- **Epoch**: A unit of time within the network. Currently, one epoch is 6,646 blocks or approximately 1 day.
-- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses:
+- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards Subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses:
- 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated.
+ 1. **Active**: An allocation is considered active when it is created onchain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular Subgraph. Active allocations accrue indexing rewards proportional to the signal on the Subgraph, and the amount of GRT allocated.
- 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data.
+ 2. **Closed**: An Indexer may claim the accrued indexing rewards on a given Subgraph by submitting a recent, and valid, Proof of Indexing (POI). This is known as closing an allocation. An allocation must have been open for a minimum of one epoch before it can be closed. The maximum allocation period is 28 epochs. If an Indexer leaves an allocation open beyond 28 epochs, it is known as a stale allocation. When an allocation is in the **Closed** state, a Fisherman can still open a dispute to challenge an Indexer for serving false data.
-- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing subgraphs.
+- **Subgraph Studio**: A powerful dapp for building, deploying, and publishing Subgraphs.
- **Fishermen**: A role within The Graph Network held by participants who monitor the accuracy and integrity of data served by Indexers. When a Fisherman identifies a query response or a POI they believe to be incorrect, they can initiate a dispute against the Indexer. If the dispute rules in favor of the Fisherman, the Indexer is slashed by losing 2.5% of their self-stake. Of this amount, 50% is awarded to the Fisherman as a bounty for their vigilance, and the remaining 50% is removed from circulation (burned). This mechanism is designed to encourage Fishermen to help maintain the reliability of the network by ensuring that Indexers are held accountable for the data they provide.
@@ -56,28 +56,28 @@ title: Glosář
- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self-stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned.
-- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT.
+- **Indexing Rewards**: The rewards that Indexers receive for indexing Subgraphs. Indexing rewards are distributed in GRT.
- **Delegation Rewards**: The rewards that Delegators receive for delegating GRT to Indexers. Delegation rewards are distributed in GRT.
- **GRT**: The Graph's work utility token. GRT provides economic incentives to network participants for contributing to the network.
-- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer.
+- **Proof of Indexing (POI)**: When an Indexer closes their allocation and wants to claim their accrued indexing rewards on a given Subgraph, they must provide a valid and recent POI. Fishermen may dispute the POI provided by an Indexer. A dispute resolved in the Fisherman's favor will result in slashing of the Indexer.
-- **Graph Node**: Graph Node is the component that indexes subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer.
+- **Graph Node**: Graph Node is the component that indexes Subgraphs and makes the resulting data available to query via a GraphQL API. As such it is central to the Indexer stack, and correct operation of Graph Node is crucial to running a successful Indexer.
-- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations.
+- **Indexer agent**: The Indexer agent is part of the Indexer stack. It facilitates the Indexer's interactions onchain, including registering on the network, managing Subgraph deployments to its Graph Node(s), and managing allocations.
- **The Graph Client**: A library for building GraphQL-based dapps in a decentralized way.
-- **Graph Explorer**: A dapp designed for network participants to explore subgraphs and interact with the protocol.
+- **Graph Explorer**: A dapp designed for network participants to explore Subgraphs and interact with the protocol.
- **Graph CLI**: A command line interface tool for building and deploying to The Graph.
- **Cooldown Period**: The time remaining until an Indexer who changed their delegation parameters can do so again.
-- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, subgraphs, curation shares, and Indexer's self-stake.
+- **L2 Transfer Tools**: Smart contracts and UI that enable network participants to transfer network related assets from Ethereum mainnet to Arbitrum One. Network participants can transfer delegated GRT, Subgraphs, curation shares, and Indexer's self-stake.
-- **Updating a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings.
+- **Updating a Subgraph**: The process of releasing a new Subgraph version with updates to the Subgraph's manifest, schema, or mappings.
-- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2).
+- **Migrating**: The process of curation shares moving from an old version of a Subgraph to a new version of a Subgraph (e.g. when v0.0.1 is updated to v0.0.2).
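The glossary above states that one epoch is currently 6,646 blocks, or approximately one day. As a quick sanity check of that claim, a minimal TypeScript sketch — assuming an average Ethereum block time of roughly 13 seconds, which is an assumption here and not a protocol constant:

```typescript
// Sanity check: 6,646 blocks at ~13 s per block is roughly one day.
// The 13-second block time is an assumed average, not a value from the docs.
const BLOCKS_PER_EPOCH = 6_646;
const ASSUMED_SECONDS_PER_BLOCK = 13;
const SECONDS_PER_DAY = 86_400;

const epochSeconds = BLOCKS_PER_EPOCH * ASSUMED_SECONDS_PER_BLOCK;
const epochDays = epochSeconds / SECONDS_PER_DAY;

console.log(epochDays.toFixed(2)); // ≈ 1.00 day
```

The exact wall-clock length of an epoch drifts with actual block times, which is why the docs say "approximately" one day.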
diff --git a/website/src/pages/cs/resources/migration-guides/assemblyscript-migration-guide.mdx b/website/src/pages/cs/resources/migration-guides/assemblyscript-migration-guide.mdx
index 756873dd8fbb..8af6d2817679 100644
--- a/website/src/pages/cs/resources/migration-guides/assemblyscript-migration-guide.mdx
+++ b/website/src/pages/cs/resources/migration-guides/assemblyscript-migration-guide.mdx
@@ -2,13 +2,13 @@
title: Průvodce migrací AssemblyScript
---
-Up until now, subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉
+Up until now, Subgraphs have been using one of the [first versions of AssemblyScript](https://github.com/AssemblyScript/assemblyscript/tree/v0.6) (v0.6). Finally we've added support for the [newest one available](https://github.com/AssemblyScript/assemblyscript/tree/v0.19.10) (v0.19.10)! 🎉
-To umožní vývojářům podgrafů používat novější funkce jazyka AS a standardní knihovny.
+That will enable Subgraph developers to use newer features of the AS language and standard library.
This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `0.22.0`. If you're already at a higher than (or equal) version to that, you've already been using version `0.19.10` of AssemblyScript 🙂
-> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the subgraph manifest.
+> Note: As of `0.24.0`, `graph-node` can support both versions, depending on the `apiVersion` specified in the Subgraph manifest.
## Funkce
@@ -44,7 +44,7 @@ This guide is applicable for anyone using `graph-cli`/`graph-ts` below version `
## Jak provést upgrade?
-1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.6`:
+1. Change your mappings `apiVersion` in `subgraph.yaml` to `0.0.9`:
```yaml
...
@@ -52,7 +52,7 @@ dataSources:
...
mapping:
...
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
...
```
@@ -106,7 +106,7 @@ let maybeValue = load()! // breaks in runtime if value is null
maybeValue.aMethod()
```
-Pokud si nejste jisti, kterou verzi zvolit, doporučujeme vždy použít bezpečnou verzi. Pokud hodnota neexistuje, možná budete chtít provést pouze časný příkaz if s návratem v obsluze podgrafu.
+If you are unsure which to choose, we recommend always using the safe version. If the value doesn't exist, you might want to do an early `if` statement with a return in your Subgraph handler.
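The early-return pattern recommended above can be sketched in TypeScript (AssemblyScript shares this syntax for nullable types); `Entity` and `load()` here are hypothetical stand-ins for a generated entity class and its loader, not real `graph-ts` APIs:

```typescript
// Hypothetical entity class and loader; load() may return null
// when no entity exists for the given id.
class Entity {
  constructor(public value: number) {}
  double(): number {
    return this.value * 2;
  }
}

function load(id: string): Entity | null {
  return id === "known" ? new Entity(21) : null;
}

// Unsafe: load(id)!.double() would crash at runtime for a missing entity.
// Safe: check for null and return early, as recommended for handlers.
function handle(id: string): number {
  const maybeValue = load(id);
  if (maybeValue == null) {
    return 0; // early return instead of the non-null assertion `!`
  }
  return maybeValue.double();
}

console.log(handle("known"));   // 42
console.log(handle("missing")); // 0
```

The non-null assertion version is shorter, but the early return makes the "entity not found" case explicit and keeps the handler from aborting at runtime.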
### Proměnlivé stínování
@@ -132,7 +132,7 @@ Pokud jste použili stínování proměnných, musíte duplicitní proměnné p
### Nulová srovnání
-Při aktualizaci podgrafu může někdy dojít k těmto chybám:
+When upgrading your Subgraph, you might sometimes get errors like these:
```typescript
ERROR TS2322: Type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt | null' is not assignable to type '~lib/@graphprotocol/graph-ts/common/numbers/BigInt'.
@@ -330,7 +330,7 @@ let wrapper = new Wrapper(y)
wrapper.n = wrapper.n + x // doesn't give compile time errors as it should
```
-Otevřeli jsme kvůli tomu problém v kompilátoru jazyka AssemblyScript, ale zatím platí, že pokud provádíte tyto operace v mapování podgrafů, měli byste je změnit tak, aby se před nimi provedla kontrola null.
+We've opened an issue on the AssemblyScript compiler for this, but for now, if you do these kinds of operations in your Subgraph mappings, you should change them to do a null check first.
```typescript
let wrapper = new Wrapper(y)
@@ -352,7 +352,7 @@ value.x = 10
value.y = 'content'
```
-Zkompiluje se, ale za běhu se přeruší, což se stane, protože hodnota nebyla inicializována, takže se ujistěte, že váš podgraf inicializoval své hodnoty, například takto:
+It will compile but break at runtime because the value hasn't been initialized, so make sure your Subgraph initializes its values, like this:
```typescript
var value = new Type() // initialized
diff --git a/website/src/pages/cs/resources/migration-guides/graphql-validations-migration-guide.mdx b/website/src/pages/cs/resources/migration-guides/graphql-validations-migration-guide.mdx
index 7f273724aff4..4051faab8eef 100644
--- a/website/src/pages/cs/resources/migration-guides/graphql-validations-migration-guide.mdx
+++ b/website/src/pages/cs/resources/migration-guides/graphql-validations-migration-guide.mdx
@@ -1,5 +1,5 @@
---
-title: Průvodce migrací na GraphQL Validace
+title: GraphQL Validations Migration Guide
---
Brzy bude `graph-node` podporovat 100% pokrytí [GraphQL Validations specifikace](https://spec.graphql.org/June2018/#sec-Validation).
@@ -20,7 +20,7 @@ Chcete-li být v souladu s těmito validacemi, postupujte podle průvodce migrac
Pomocí migračního nástroje CLI můžete najít případné problémy v operacích GraphQL a opravit je. Případně můžete aktualizovat koncový bod svého klienta GraphQL tak, aby používal koncový bod `https://api-next.thegraph.com/subgraphs/name/$GITHUB_USER/$SUBGRAPH_NAME`. Testování dotazů proti tomuto koncovému bodu vám pomůže najít problémy ve vašich dotazech.
-> Není nutné migrovat všechny podgrafy, pokud používáte [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) nebo [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), ty již zajistí, že vaše dotazy jsou platné.
+> Not all Subgraphs will need to be migrated. If you are using [GraphQL ESlint](https://the-guild.dev/graphql/eslint/docs) or [GraphQL Code Generator](https://the-guild.dev/graphql/codegen), they already ensure that your queries are valid.
## Migrační nástroj CLI
diff --git a/website/src/pages/cs/resources/roles/curating.mdx b/website/src/pages/cs/resources/roles/curating.mdx
index c8b9caf18e2e..f06866a7c0ee 100644
--- a/website/src/pages/cs/resources/roles/curating.mdx
+++ b/website/src/pages/cs/resources/roles/curating.mdx
@@ -2,37 +2,37 @@
title: Kurátorování
---
-Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index.
+Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the Subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality Subgraphs with a share of the query fees those Subgraphs generate. The amount of GRT signaled is one of the key considerations for Indexers when determining which Subgraphs to index.
## What Does Signaling Mean for The Graph Network?
-Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed.
+Before consumers can query a Subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality Subgraphs, they need to know what Subgraphs to index. When Curators signal on a Subgraph, it lets Indexers know that a Subgraph is in demand and of sufficient quality that it should be indexed.
-Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives.
+Curators make The Graph network efficient and [signaling](#how-to-signal) is the process that Curators use to let Indexers know that a Subgraph is good to index. Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the Subgraph, entitling them to a portion of future query fees that the Subgraph drives.
-Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them.
+Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to Subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality Subgraph because there will be fewer queries to process or fewer Indexers to process them.
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
+The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs; signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
-When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
+When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
-If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with.
+If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the Subgraphs you need assistance with.
-Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below).
+Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer (screenshot below).
-
+
## Jak signalizovat
-Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/)
+Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain Subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/subgraphs/explorer/)
-Kurátor si může zvolit, zda bude signalizovat na konkrétní verzi podgrafu, nebo zda se jeho signál automaticky přenese na nejnovější produkční sestavení daného podgrafu. Obě strategie jsou platné a mají své výhody i nevýhody.
+A curator can choose to signal on a specific Subgraph version, or they can choose to have their signal automatically migrate to the newest production build of that Subgraph. Both are valid strategies and come with their own pros and cons.
-Signaling on a specific version is especially useful when one subgraph is used by multiple dapps. One dapp might need to regularly update the subgraph with new features. Another dapp might prefer to use an older, well-tested subgraph version. Upon initial curation, a 1% standard tax is incurred.
+Signaling on a specific version is especially useful when one Subgraph is used by multiple dapps. One dapp might need to regularly update the Subgraph with new features. Another dapp might prefer to use an older, well-tested Subgraph version. Upon initial curation, a 1% standard tax is incurred.
Automatická migrace signálu na nejnovější produkční sestavení může být cenná, protože zajistí, že se poplatky za dotazy budou neustále zvyšovat. Při každém kurátorství se platí 1% kurátorský poplatek. Při každé migraci také zaplatíte 0,5% kurátorskou daň. Vývojáři podgrafu jsou odrazováni od častého publikování nových verzí - musí zaplatit 0.5% kurátorskou daň ze všech automaticky migrovaných kurátorských podílů.
-> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, and also transfers tokens into The Graph proxy.
+> **Note**: The first address to signal a particular Subgraph is considered the first curator and will have to do much more gas-intensive work than subsequent curators, because the first curator initializes the curation share tokens and also transfers tokens into The Graph proxy.
## Withdrawing your GRT
@@ -40,39 +40,39 @@ Curators have the option to withdraw their signaled GRT at any time.
Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax).
-Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled.
+Once a curator withdraws their signal, Indexers may choose to keep indexing the Subgraph, even if there's currently no active GRT signaled.
-However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph.
+However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the Subgraph.
## Rizika
1. Trh s dotazy je v Graf ze své podstaty mladý a existuje riziko, že vaše %APY může být nižší, než očekáváte, v důsledku dynamiky rodícího se trhu.
-2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned.
-3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/).
-4. Podgraf může selhat kvůli chybě. Za neúspěšný podgraf se neúčtují poplatky za dotaz. V důsledku toho budete muset počkat, až vývojář chybu opraví a nasadí novou verzi.
- - Pokud jste přihlášeni k odběru nejnovější verze podgrafu, vaše sdílené položky se automaticky přemigrují na tuto novou verzi. Při tom bude účtována 0,5% kurátorská daň.
- - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax.
+2. Curation Fee - when a curator signals GRT on a Subgraph, they incur a 1% curation tax. This fee is burned.
+3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dapp developer stops versioning/improving and querying their Subgraph or if a Subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/resources/roles/delegating/delegating/).
+4. A Subgraph can fail due to a bug. A failed Subgraph does not accrue query fees. As a result, you’ll have to wait until the developer fixes the bug and deploys a new version.
+ - If you are subscribed to the newest version of a Subgraph, your shares will auto-migrate to that new version. This will incur a 0.5% curation tax.
+ - If you have signaled on a specific Subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new Subgraph version, thus incurring a 1% curation tax.
## Nejčastější dotazy ke kurátorství
### 1. Kolik % z poplatků za dotazy kurátoři vydělávají?
-By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance.
+By signalling on a Subgraph, you will earn a share of all the query fees that the Subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance.
-### 2. Jak se rozhodnu, které podgrafy jsou kvalitní a na kterých je třeba signalizovat?
+### 2. How do I decide which Subgraphs are high quality to signal on?
-Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result:
+Finding high-quality Subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy Subgraphs that are driving query volume. A trustworthy Subgraph may be valuable if it is complete, accurate, and supports a dapp’s data needs. A poorly architected Subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a Subgraph’s architecture or code in order to assess if a Subgraph is valuable. As a result:
-- Curators can use their understanding of a network to try and predict how an individual subgraph may generate a higher or lower query volume in the future
-- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on.
+- Curators can use their understanding of a network to try and predict how an individual Subgraph may generate a higher or lower query volume in the future
+- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the Subgraph developer is can help determine whether or not a Subgraph is worth signalling on.
-### 3. Jaké jsou náklady na aktualizaci podgrafu?
+### 3. What’s the cost of updating a Subgraph?
-Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an onchain action that costs gas.
+Migrating your curation shares to a new Subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a Subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half the curation tax, i.e. 0.5%, because upgrading Subgraphs is an onchain action that costs gas.
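The fee arithmetic described above (a 1% curation tax when signaling, and a 0.5% tax when shares auto-migrate to a new version) can be sketched as follows; the 1,000 GRT figure is an illustrative amount, not from the docs, and this ignores any bonding-curve effects on share pricing:

```typescript
// Illustrative arithmetic for the curation taxes described above.
const CURATION_TAX = 0.01;   // 1% burned when signaling on a Subgraph
const MIGRATION_TAX = 0.005; // 0.5% burned when shares auto-migrate

function afterSignal(grt: number): number {
  return grt * (1 - CURATION_TAX);
}

function afterAutoMigrate(grt: number): number {
  return grt * (1 - MIGRATION_TAX);
}

// Signaling 1,000 GRT burns 10 GRT, leaving 990 GRT of signal;
// a later auto-migration burns another 0.5% (≈ 4.95 GRT).
const signaled = afterSignal(1_000);
const migrated = afterAutoMigrate(signaled); // ≈ 985.05
console.log(signaled, migrated);
```

This is why the docs suggest Subgraph developers avoid publishing new versions too frequently: every auto-migration burns another 0.5% of their Curators' shares.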
-### 4. Jak často mohu svůj podgraf aktualizovat?
+### 4. How often can I update my Subgraph?
-Doporučujeme, abyste podgrafy neaktualizovali příliš často. Další podrobnosti naleznete v otázce výše.
+It’s suggested that you don’t update your Subgraphs too frequently. See the question above for more details.
### 5. Mohu prodat své kurátorské podíly?
diff --git a/website/src/pages/cs/resources/roles/delegating/undelegating.mdx b/website/src/pages/cs/resources/roles/delegating/undelegating.mdx
index 071253821e63..bc98d6aeff17 100644
--- a/website/src/pages/cs/resources/roles/delegating/undelegating.mdx
+++ b/website/src/pages/cs/resources/roles/delegating/undelegating.mdx
@@ -13,13 +13,11 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the
1. Visit [Graph Explorer](https://thegraph.com/explorer). Please make sure you're on Explorer and **not** Subgraph Studio.
2. Click on your profile. You can find it on the top right corner of the page.
-
- Make sure that your wallet is connected. If it's not connected, you will see the "connect" button instead.
3. Once you're in your profile, click on the Delegating tab. In the Delegating tab, you can view the list of Indexers you have delegated to.
4. Click on the Indexer from which you wish to withdraw your tokens.
-
- Make sure to note the specific Indexer, as you will need to find them again to withdraw.
5. Select the "Undelegate" option by clicking on the three dots next to the Indexer on the right side, see image below:
@@ -37,11 +35,9 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the
### Step-by-Step
1. Find your delegation transaction on Arbiscan.
-
- Here's an [example transaction on Arbiscan](https://arbiscan.io/tx/0xcf2110eac897099f821064445041031efb32786392bdbe7544a4cb7a6b2e4f9a)
2. Navigate to "Transaction Action" where you can find the staking extension contract:
-
- [This is the staking extension contract for the example listed above](https://arbiscan.io/address/0x00669A4CF01450B64E8A2A20E9b1FCB71E61eF03)
3. Then click on "Contract". 
@@ -66,7 +62,7 @@ Learn how to withdraw your delegated tokens through [Graph Explorer](https://the
11. You can see how much you will have available to withdraw by calling the `getWithdrawableDelegatedTokens` on Read Custom and passing it your delegation tuple. See screenshot below:
- 
+ 
## Další zdroje
diff --git a/website/src/pages/cs/resources/subgraph-studio-faq.mdx b/website/src/pages/cs/resources/subgraph-studio-faq.mdx
index a67af0f6505e..1f036fb46484 100644
--- a/website/src/pages/cs/resources/subgraph-studio-faq.mdx
+++ b/website/src/pages/cs/resources/subgraph-studio-faq.mdx
@@ -4,7 +4,7 @@ title: FAQs Podgraf Studio
## 1. Co je Podgraf Studio?
-[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing subgraphs and API keys.
+[Subgraph Studio](https://thegraph.com/studio/) is a dapp for creating, managing, and publishing Subgraphs and API keys.
## 2. Jak vytvořím klíč API?
@@ -18,14 +18,14 @@ Yes! You can create multiple API Keys to use in different projects. Check out th
Po vytvoření klíče API můžete v části Zabezpečení definovat domény, které se mohou dotazovat na konkrétní klíč API.
-## 5. Mohu svůj podgraf převést na jiného vlastníka?
+## 5. Can I transfer my Subgraph to another owner?
-Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'.
+Yes, Subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the Subgraph's details page and selecting 'Transfer ownership'.
-Všimněte si, že po přenesení podgrafu jej již nebudete moci ve Studio zobrazit ani upravovat.
+Note that you will no longer be able to see or edit the Subgraph in Studio once it has been transferred.
-## 6. Jak najdu adresy URL dotazů pro podgrafy, pokud nejsem Vývojář podgrafu, který chci použít?
+## 6. How do I find query URLs for Subgraphs if I’m not the developer of the Subgraph I want to use?
-You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio.
+You can find the query URL of each Subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the Subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio.
-Nezapomeňte, že si můžete vytvořit klíč API a dotazovat se na libovolný podgraf zveřejněný v síti, i když si podgraf vytvoříte sami. Tyto dotazy prostřednictvím nového klíče API jsou placené dotazy jako jakékoli jiné v síti.
+Remember that you can create an API key and query any Subgraph published to the network, even if you build a Subgraph yourself. Queries made via the new API key are paid queries, like any other on the network.
diff --git a/website/src/pages/cs/resources/tokenomics.mdx b/website/src/pages/cs/resources/tokenomics.mdx
index 92b1514574b4..66eefd5b8b1a 100644
--- a/website/src/pages/cs/resources/tokenomics.mdx
+++ b/website/src/pages/cs/resources/tokenomics.mdx
@@ -6,7 +6,7 @@ description: The Graph Network is incentivized by powerful tokenomics. Here’s
## Přehled
-The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph.
+The Graph is a decentralized protocol that enables easy access to blockchain data. It indexes blockchain data similarly to how Google indexes the web. If you've used a dapp that retrieves data from a Subgraph, you've probably interacted with The Graph. Today, thousands of [popular dapps](https://thegraph.com/explorer) in the web3 ecosystem use The Graph.
## Specifics
@@ -24,9 +24,9 @@ There are four primary network participants:
1. Delegators - Delegate GRT to Indexers & secure the network
-2. Kurátoři - nalezení nejlepších podgrafů pro indexátory
+2. Curators - Find the best Subgraphs for Indexers
-3. Developers - Build & query subgraphs
+3. Developers - Build & query Subgraphs
4. Indexery - páteř blockchainových dat
@@ -36,7 +36,7 @@ Fishermen and Arbitrators are also integral to the network's success through oth
## Delegators (Passively earn GRT)
-Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually.
+Indexers are delegated GRT by Delegators, increasing the Indexer’s stake in Subgraphs on the network. In return, Delegators earn a percentage of all query fees and indexing rewards from the Indexer. Each Indexer sets the cut that will be rewarded to Delegators independently, creating competition among Indexers to attract Delegators. Most Indexers offer between 9-12% annually.
For example, if a Delegator were to delegate 15k GRT to an Indexer offering 10%, the Delegator would receive ~1,500 GRT in rewards annually.
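The reward arithmetic in this example can be sketched as follows (the function name is hypothetical, and the 10% figure is an Indexer's advertised effective rate, not a protocol constant):

```typescript
// Hypothetical helper illustrating the delegation reward example above.
// annualRate is the effective annual rate offered by the Indexer (e.g. 0.10 for 10%).
function annualDelegationRewards(delegatedGrt: number, annualRate: number): number {
  return delegatedGrt * annualRate;
}

// 15,000 GRT delegated at 10% yields ~1,500 GRT in rewards annually.
const rewards = annualDelegationRewards(15_000, 0.1);
```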
@@ -46,25 +46,25 @@ If you're reading this, you're capable of becoming a Delegator right now by head
## Curators (Earn GRT)
-Curators identify high-quality subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the subgraph. While any independent network participant can be a Curator, typically subgraph developers are among the first Curators for their own subgraphs because they want to ensure their subgraph is indexed.
+Curators identify high-quality Subgraphs and "curate" them (i.e., signal GRT on them) to earn curation shares, which guarantee a percentage of all future query fees generated by the Subgraph. While any independent network participant can be a Curator, typically Subgraph developers are among the first Curators for their own Subgraphs because they want to ensure their Subgraph is indexed.
-Subgraph developers are encouraged to curate their subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation.
+Subgraph developers are encouraged to curate their Subgraph with at least 3,000 GRT. However, this number may be impacted by network activity and community participation.
-Curators pay a 1% curation tax when they curate a new subgraph. This curation tax is burned, decreasing the supply of GRT.
+Curators pay a 1% curation tax when they curate a new Subgraph. This curation tax is burned, decreasing the supply of GRT.
## Developers
-Developers build and query subgraphs to retrieve blockchain data. Since subgraphs are open source, developers can query existing subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants.
+Developers build and query Subgraphs to retrieve blockchain data. Since Subgraphs are open source, developers can query existing Subgraphs to load blockchain data into their dapps. Developers pay for queries they make in GRT, which is distributed to network participants.
-### Vytvoření podgrafu
+### Creating a Subgraph
-Developers can [create a subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers.
+Developers can [create a Subgraph](/developing/creating-a-subgraph/) to index data on the blockchain. Subgraphs are instructions for Indexers about which data should be served to consumers.
-Once developers have built and tested their subgraph, they can [publish their subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network.
+Once developers have built and tested their Subgraph, they can [publish their Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/) on The Graph's decentralized network.
-### Dotazování na existující podgraf
+### Querying an existing Subgraph
-Once a subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the subgraph.
+Once a Subgraph is [published](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph's decentralized network, anyone can create an API key, add GRT to their billing balance, and query the Subgraph.
Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and the query fees are paid for with GRT in [Subgraph Studio](https://thegraph.com/studio/). Query fees are distributed to network participants based on their contributions to the protocol.
@@ -72,27 +72,27 @@ Subgraphs are [queried using GraphQL](/subgraphs/querying/introduction/), and th
## Indexers (Earn GRT)
-Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from subgraphs.
+Indexers are the backbone of The Graph. They operate independent hardware and software powering The Graph’s decentralized network. Indexers serve data to consumers based on instructions from Subgraphs.
Indexers can earn GRT rewards in two ways:
-1. **Query fees**: GRT paid by developers or users for subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)).
+1. **Query fees**: GRT paid by developers or users for Subgraph data queries. Query fees are directly distributed to Indexers according to the exponential rebate function (see GIP [here](https://forum.thegraph.com/t/gip-0051-exponential-query-fee-rebates-for-indexers/4162)).
-2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of subgraphs they are indexing. These rewards incentivize Indexers to index subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately.
+2. **Indexing rewards**: the 3% annual issuance is distributed to Indexers based on the number of Subgraphs they are indexing. These rewards incentivize Indexers to index Subgraphs, occasionally before the query fees begin, to accrue and submit Proofs of Indexing (POIs), verifying that they have indexed data accurately.
-Each subgraph is allotted a portion of the total network token issuance, based on the amount of the subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the subgraph.
+Each Subgraph is allotted a portion of the total network token issuance, based on the amount of the Subgraph’s curation signal. That amount is then rewarded to Indexers based on their allocated stake on the Subgraph.
In order to run an indexing node, Indexers must self-stake 100,000 GRT or more with the network. Indexers are incentivized to self-stake GRT in proportion to the amount of queries they serve.
-Indexers can increase their GRT allocations on subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network.
+Indexers can increase their GRT allocations on Subgraphs by accepting GRT delegation from Delegators, and they can accept up to 16 times their initial self-stake. If an Indexer becomes "over-delegated" (i.e., more than 16 times their initial self-stake), they will not be able to use the additional GRT from Delegators until they increase their self-stake in the network.
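As a rough sketch of the delegation capacity rule described above (the 16x ratio comes from the text; the function name is hypothetical):

```typescript
// An Indexer can put delegated GRT to work only up to 16x their self-stake;
// anything beyond that is "over-delegated" and earns nothing until the
// Indexer raises their self-stake.
const MAX_DELEGATION_RATIO = 16;

function usableDelegation(selfStakeGrt: number, delegatedGrt: number): number {
  return Math.min(delegatedGrt, selfStakeGrt * MAX_DELEGATION_RATIO);
}

// An Indexer self-staking the 100,000 GRT minimum can use at most
// 1,600,000 GRT of delegation.
```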
The amount of rewards an Indexer receives can vary based on the Indexer's self-stake, accepted delegation, quality of service, and many more factors.
## Token Supply: Burning & Issuance
-The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network.
+The initial token supply is 10 billion GRT, with a target of 3% new issuance annually to reward Indexers for allocating stake on Subgraphs. This means that the total supply of GRT tokens will increase by 3% each year as new tokens are issued to Indexers for their contribution to the network.
-The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a subgraph, and a 1% of query fees for blockchain data.
+The Graph is designed with multiple burning mechanisms to offset new token issuance. Approximately 1% of the GRT supply is burned annually through various activities on the network, and this number has been increasing as network activity continues to grow. These burning activities include a 0.5% delegation tax whenever a Delegator delegates GRT to an Indexer, a 1% curation tax when Curators signal on a Subgraph, and a 1% burn of query fees for blockchain data.
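The burn rates listed above can be sketched as follows (rates are taken from the text; the constant and function names are hypothetical):

```typescript
// Illustrative burn-tax arithmetic for the mechanisms described above.
const DELEGATION_TAX = 0.005; // 0.5% burned when a Delegator delegates GRT
const CURATION_TAX = 0.01;    // 1% burned when a Curator signals on a Subgraph

function burnedOnDelegation(grt: number): number {
  return grt * DELEGATION_TAX;
}

function burnedOnCuration(grt: number): number {
  return grt * CURATION_TAX;
}

// Delegating 10,000 GRT burns ~50 GRT; signalling 3,000 GRT burns ~30 GRT.
```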

diff --git a/website/src/pages/cs/sps/introduction.mdx b/website/src/pages/cs/sps/introduction.mdx
index f0180d6a569b..4938d23102e4 100644
--- a/website/src/pages/cs/sps/introduction.mdx
+++ b/website/src/pages/cs/sps/introduction.mdx
@@ -3,21 +3,21 @@ title: Introduction to Substreams-Powered Subgraphs
sidebarTitle: Úvod
---
-Boost your subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
+Boost your Subgraph's efficiency and scalability by using [Substreams](/substreams/introduction/) to stream pre-indexed blockchain data.
## Přehled
-Use a Substreams package (`.spkg`) as a data source to give your subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
+Use a Substreams package (`.spkg`) as a data source to give your Subgraph access to a stream of pre-indexed blockchain data. This enables more efficient and scalable data handling, especially with large or complex blockchain networks.
### Specifics
There are two methods of enabling this technology:
-1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a subgraph handler and move all your logic into a subgraph. This method creates the subgraph entities directly in the subgraph.
+1. **Using Substreams [triggers](/sps/triggers/)**: Consume from any Substreams module by importing the Protobuf model through a Subgraph handler and move all your logic into a Subgraph. This method creates the Subgraph entities directly in the Subgraph.
-2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your subgraph entities.
+2. **Using [Entity Changes](https://docs.substreams.dev/how-to-guides/sinks/subgraph/graph-out)**: By writing more of the logic into Substreams, you can consume the module's output directly into [graph-node](/indexing/tooling/graph-node/). In graph-node, you can use the Substreams data to create your Subgraph entities.
-You can choose where to place your logic, either in the subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node.
+You can choose where to place your logic, either in the Subgraph or Substreams. However, consider what aligns with your data needs, as Substreams has a parallelized model, and triggers are consumed linearly in the graph node.
### Další zdroje
@@ -28,3 +28,4 @@ Visit the following links for tutorials on using code-generation tooling to buil
- [Starknet](https://docs.substreams.dev/tutorials/intro-to-tutorials/starknet)
- [Injective](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/injective)
- [MANTRA](https://docs.substreams.dev/tutorials/intro-to-tutorials/on-cosmos/mantra)
+- [Stellar](https://docs.substreams.dev/tutorials/intro-to-tutorials/stellar)
diff --git a/website/src/pages/cs/sps/sps-faq.mdx b/website/src/pages/cs/sps/sps-faq.mdx
index 657b027cf5e9..25e77dc3c7f1 100644
--- a/website/src/pages/cs/sps/sps-faq.mdx
+++ b/website/src/pages/cs/sps/sps-faq.mdx
@@ -11,21 +11,21 @@ Specifically, it's a blockchain-agnostic, parallelized, and streaming-first engi
Substreams is developed by [StreamingFast](https://www.streamingfast.io/). Visit the [Substreams Documentation](/substreams/introduction/) to learn more about Substreams.
-## Co jsou substreamu napájen podgrafy?
+## What are Substreams-powered Subgraphs?
-[Substreams-powered subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations, can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with subgraph entities.
+[Substreams-powered Subgraphs](/sps/introduction/) combine the power of Substreams with the queryability of Subgraphs. When publishing a Substreams-powered Subgraph, the data produced by the Substreams transformations can [output entity changes](https://github.com/streamingfast/substreams-sink-entity-changes/blob/develop/substreams-entity-change/src/tables.rs) that are compatible with Subgraph entities.
-If you are already familiar with subgraph development, note that Substreams-powered subgraphs can then be queried just as if it had been produced by the AssemblyScript transformation layer. This provides all the benefits of subgraphs, including a dynamic and flexible GraphQL API.
+If you are already familiar with Subgraph development, note that Substreams-powered Subgraphs can then be queried just as if they had been produced by the AssemblyScript transformation layer. This provides all the benefits of Subgraphs, including a dynamic and flexible GraphQL API.
-## Jak se liší substream, které jsou napájeny podgrafy, od podgrafů?
+## How are Substreams-powered Subgraphs different from Subgraphs?
Subgraphs are made up of datasources which specify onchain events, and how those events should be transformed via handlers written in Assemblyscript. These events are processed sequentially, based on the order in which events happen onchain.
-By contrast, substreams-powered subgraphs have a single datasource which references a substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional subgraphs, and can also benefit from massively parallelised processing, which can mean much faster processing times.
+By contrast, Substreams-powered Subgraphs have a single datasource that references a Substreams package, which is processed by the Graph Node. Substreams have access to additional granular onchain data compared to conventional Subgraphs, and can also benefit from massively parallelized processing, which can mean much faster processing times.
-## Jaké jsou výhody používání substreamu, které jsou založeny na podgraf?
+## What are the benefits of using Substreams-powered Subgraphs?
-Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
+Substreams-powered Subgraphs combine all the benefits of Substreams with the queryability of Subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
## Jaké jsou výhody Substreams?
@@ -35,7 +35,7 @@ Používání ubstreams má mnoho výhod, mimo jiné:
- Vysoce výkonné indexování: Řádově rychlejší indexování prostřednictvím rozsáhlých klastrů paralelních operací (viz BigQuery).
-- Umyvadlo kdekoli: Data můžete ukládat kamkoli chcete: Vložte data do PostgreSQL, MongoDB, Kafka, podgrafy, ploché soubory, tabulky Google.
+- Sink anywhere: Send your data anywhere you want: PostgreSQL, MongoDB, Kafka, Subgraphs, flat files, Google Sheets.
- Programovatelné: Pomocí kódu můžete přizpůsobit extrakci, provádět agregace v čase transformace a modelovat výstup pro více zdrojů.
@@ -63,17 +63,17 @@ Používání Firehose přináší mnoho výhod, včetně:
- Využívá ploché soubory: Blockchain data jsou extrahována do plochých souborů, což je nejlevnější a nejoptimálnější dostupný výpočetní zdroj.
-## Kde mohou vývojáři získat více informací o substreamu, které jsou založeny na podgraf a substreamu?
+## Where can developers access more information about Substreams-powered Subgraphs and Substreams?
The [Substreams documentation](/substreams/introduction/) will teach you how to build Substreams modules.
-The [Substreams-powered subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph.
+The [Substreams-powered Subgraphs documentation](/sps/introduction/) will show you how to package them for deployment on The Graph.
The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code.
## Jaká je role modulů Rust v Substreamu?
-Moduly Rust jsou ekvivalentem mapovačů AssemblyScript v podgraf. Jsou kompilovány do WASM podobným způsobem, ale programovací model umožňuje paralelní provádění. Definují druh transformací a agregací, které chcete aplikovat na surová data blockchainu.
+Rust modules are the equivalent of the AssemblyScript mappers in Subgraphs. They are compiled to WASM in a similar way, but the programming model allows for parallel execution. They define the sort of transformations and aggregations you want to apply to the raw blockchain data.
See [modules documentation](https://docs.substreams.dev/reference-material/substreams-components/modules#modules) for details.
@@ -81,16 +81,16 @@ See [modules documentation](https://docs.substreams.dev/reference-material/subst
Při použití substreamů probíhá kompozice na transformační vrstvě, což umožňuje opakované použití modulů uložených v mezipaměti.
-Jako příklad může Alice vytvořit cenový modul DEX, Bob jej může použít k vytvoření agregátoru objemu pro některé tokeny, které ho zajímají, a Lisa může zkombinovat čtyři jednotlivé cenové moduly DEX a vytvořit cenové orákulum. Jediný požadavek Substreams zabalí všechny moduly těchto jednotlivců, propojí je dohromady a nabídne mnohem sofistikovanější tok dat. Tento proud pak může být použit k naplnění podgrafu a může být dotazován spotřebiteli.
+As an example, Alice can build a DEX price module, Bob can use it to build a volume aggregator for some tokens of interest, and Lisa can combine four individual DEX price modules to create a price oracle. A single Substreams request will package all of these individuals' modules and link them together to offer a much more refined stream of data. That stream can then be used to populate a Subgraph and be queried by consumers.
## Jak můžete vytvořit a nasadit Substreams využívající podgraf?
After [defining](/sps/introduction/) a Substreams-powered Subgraph, you can use the Graph CLI to deploy it in [Subgraph Studio](https://thegraph.com/studio/).
-## Kde najdu příklady podgrafů Substreams a Substreams-powered?
+## Where can I find examples of Substreams and Substreams-powered Subgraphs?
-Příklady podgrafů Substreams a Substreams-powered najdete na [tomto repozitáři Github](https://github.com/pinax-network/awesome-substreams).
+You can visit [this GitHub repo](https://github.com/pinax-network/awesome-substreams) to find examples of Substreams and Substreams-powered Subgraphs.
-## Co znamenají substreams a podgrafy napájené substreams pro síť grafů?
+## What do Substreams and Substreams-powered Subgraphs mean for The Graph Network?
Integrace slibuje mnoho výhod, včetně extrémně výkonného indexování a větší složitelnosti díky využití komunitních modulů a stavění na nich.
diff --git a/website/src/pages/cs/sps/triggers.mdx b/website/src/pages/cs/sps/triggers.mdx
index 06a8845e4daf..b0c4bea23f3d 100644
--- a/website/src/pages/cs/sps/triggers.mdx
+++ b/website/src/pages/cs/sps/triggers.mdx
@@ -6,13 +6,13 @@ Use Custom Triggers and enable the full use GraphQL.
## Přehled
-Custom Triggers allow you to send data directly into your subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
+Custom Triggers allow you to send data directly into your Subgraph mappings file and entities, which are similar to tables and fields. This enables you to fully use the GraphQL layer.
-By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your subgraph's handler. This ensures efficient and streamlined data management within the subgraph framework.
+By importing the Protobuf definitions emitted by your Substreams module, you can receive and process this data in your Subgraph's handler. This ensures efficient and streamlined data management within the Subgraph framework.
### Defining `handleTransactions`
-The following code demonstrates how to define a `handleTransactions` function in a subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new subgraph entity is created.
+The following code demonstrates how to define a `handleTransactions` function in a Subgraph handler. This function receives raw Substreams bytes as a parameter and decodes them into a `Transactions` object. For each transaction, a new Subgraph entity is created.
```tsx
export function handleTransactions(bytes: Uint8Array): void {
@@ -38,9 +38,9 @@ Here's what you're seeing in the `mappings.ts` file:
1. The bytes containing Substreams data are decoded into the generated `Transactions` object, this object is used like any other AssemblyScript object
2. Looping over the transactions
-3. Create a new subgraph entity for every transaction
+3. Create a new Subgraph entity for every transaction
-To go through a detailed example of a trigger-based subgraph, [check out the tutorial](/sps/tutorial/).
+To go through a detailed example of a trigger-based Subgraph, [check out the tutorial](/sps/tutorial/).
### Další zdroje
diff --git a/website/src/pages/cs/sps/tutorial.mdx b/website/src/pages/cs/sps/tutorial.mdx
index 3f98c57508bd..c1850bab04fa 100644
--- a/website/src/pages/cs/sps/tutorial.mdx
+++ b/website/src/pages/cs/sps/tutorial.mdx
@@ -1,9 +1,9 @@
---
-title: 'Tutorial: Set Up a Substreams-Powered Subgraph on Solana'
+title: "Tutorial: Set Up a Substreams-Powered Subgraph on Solana"
sidebarTitle: Tutorial
---
-Successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token.
+Successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token.
## Začněte
@@ -54,7 +54,7 @@ params: # Modify the param fields to meet your needs
### Step 2: Generate the Subgraph Manifest
-Once the project is initialized, generate a subgraph manifest by running the following command in the Dev Container:
+Once the project is initialized, generate a Subgraph manifest by running the following command in the Dev Container:
```bash
substreams codegen subgraph
@@ -73,7 +73,7 @@ dataSources:
moduleName: map_spl_transfers # Module defined in the substreams.yaml
file: ./my-project-sol-v0.1.0.spkg
mapping:
- apiVersion: 0.0.7
+ apiVersion: 0.0.9
kind: substreams/graph-entities
file: ./src/mappings.ts
handler: handleTriggers
@@ -81,7 +81,7 @@ dataSources:
### Step 3: Define Entities in `schema.graphql`
-Define the fields you want to save in your subgraph entities by updating the `schema.graphql` file.
+Define the fields you want to save in your Subgraph entities by updating the `schema.graphql` file.
Here is an example:
@@ -101,7 +101,7 @@ This schema defines a `MyTransfer` entity with fields such as `id`, `amount`, `s
With the Protobuf objects generated, you can now handle the decoded Substreams data in your `mappings.ts` file found in the `./src` directory.
-The example below demonstrates how to extract to subgraph entities the non-derived transfers associated to the Orca account id:
+The example below demonstrates how to extract the non-derived transfers associated with the Orca account id into Subgraph entities:
```ts
import { Protobuf } from 'as-proto/assembly'
@@ -140,11 +140,11 @@ To generate Protobuf objects in AssemblyScript, run the following command:
npm run protogen
```
-This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the subgraph's handler.
+This command converts the Protobuf definitions into AssemblyScript, allowing you to use them in the Subgraph's handler.
### Závěr
-Congratulations! You've successfully set up a trigger-based Substreams-powered subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
+Congratulations! You've successfully set up a trigger-based Substreams-powered Subgraph for a Solana SPL token. You can take the next step by customizing your schema, mappings, and modules to fit your specific use case.
### Video Tutorial
diff --git a/website/src/pages/cs/subgraphs/_meta-titles.json b/website/src/pages/cs/subgraphs/_meta-titles.json
index 0556abfc236c..c2d850dfc35c 100644
--- a/website/src/pages/cs/subgraphs/_meta-titles.json
+++ b/website/src/pages/cs/subgraphs/_meta-titles.json
@@ -1,6 +1,6 @@
{
"querying": "Querying",
"developing": "Developing",
- "cookbook": "Cookbook",
- "best-practices": "Best Practices"
+ "guides": "How-to Guides",
+ "best-practices": "Osvědčené postupy"
}
diff --git a/website/src/pages/cs/subgraphs/best-practices/avoid-eth-calls.mdx b/website/src/pages/cs/subgraphs/best-practices/avoid-eth-calls.mdx
index 3ce9c29a17a0..2783957614bf 100644
--- a/website/src/pages/cs/subgraphs/best-practices/avoid-eth-calls.mdx
+++ b/website/src/pages/cs/subgraphs/best-practices/avoid-eth-calls.mdx
@@ -1,19 +1,19 @@
---
title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls
-sidebarTitle: 'Subgraph Best Practice 4: Avoiding eth_calls'
+sidebarTitle: Avoiding eth_calls
---
## TLDR
-`eth_calls` jsou volání, která lze provést z podgrafu do uzlu Ethereum. Tato volání zabírají značnou dobu, než vrátí data, což zpomaluje indexování. Pokud je to možné, navrhněte chytré kontrakty tak, aby emitovaly všechna potřebná data, takže nebudete muset používat `eth_calls`.
+`eth_calls` are calls that can be made from a Subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`.
## Why You Should Avoid `eth_calls`
-Podgraf jsou optimalizovány pro indexování dat událostí emitovaných z chytré smlouvy. Podgraf může také indexovat data pocházející z `eth_call`, což však může indexování podgrafu výrazně zpomalit, protože `eth_calls` vyžadují externí volání chytrých smluv. Odezva těchto volání nezávisí na podgrafu, ale na konektivitě a odezvě dotazovaného uzlu Ethereum. Minimalizací nebo eliminací eth_calls v našich podgrafech můžeme výrazně zvýšit rychlost indexování.
+Subgraphs are optimized to index event data emitted from smart contracts. A Subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down Subgraph indexing, as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the Subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating eth_calls in our Subgraphs, we can significantly improve our indexing speed.
### What Does an eth_call Look Like?
-`eth_calls` jsou často nutné, pokud data potřebná pro podgraf nejsou dostupná prostřednictvím emitovaných událostí. Uvažujme například scénář, kdy podgraf potřebuje zjistit, zda jsou tokeny ERC20 součástí určitého poolu, ale smlouva emituje pouze základní událost `Transfer` a neemituje událost, která by obsahovala data, která potřebujeme:
+`eth_calls` are often necessary when the data required for a Subgraph is not available through emitted events. For example, consider a scenario where a Subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need:
```solidity
event Transfer(address indexed from, address indexed to, uint256 value);
@@ -44,7 +44,7 @@ export function handleTransfer(event: Transfer): void {
}
```
-To je funkční, ale není to ideální, protože to zpomaluje indexování našeho podgrafu.
+This works, but it is not ideal, as it slows down our Subgraph’s indexing.
## How to Eliminate `eth_calls`
@@ -54,7 +54,7 @@ V ideálním případě by měl být inteligentní kontrakt aktualizován tak, a
event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo);
```
-Díky této aktualizaci může podgraf přímo indexovat požadovaná data bez externích volání:
+With this update, the Subgraph can directly index the required data without external calls:
```typescript
import { Address } from '@graphprotocol/graph-ts'
@@ -96,11 +96,11 @@ calls:
The handler itself accesses the result of this `eth_call` exactly as in the previous section, by binding to the contract and making the call. graph-node caches the results of declared `eth_calls` in memory, and the call from the handler will retrieve the result from this in-memory cache instead of making an actual RPC call.
-Poznámka: Deklarované eth_calls lze provádět pouze v podgraf s verzí specVersion >= 1.2.0.
+Note: Declared eth_calls can only be made in Subgraphs with specVersion >= 1.2.0.
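The in-memory caching behavior described above can be sketched as a simple memo cache. This is a hypothetical illustration of the idea, not graph-node's actual implementation; the key shape and the `rpc` callback are assumptions for the sketch.

```typescript
// Hypothetical sketch: a declared call's result is fetched once per
// (block, contract, call) and later handler lookups hit the cache
// instead of making another RPC round trip.
class DeclaredCallCache {
  private cache = new Map<string, string>()
  rpcCalls = 0 // counts actual node round trips

  // `rpc` stands in for the real Ethereum node call.
  get(block: number, contract: string, call: string, rpc: () => string): string {
    const key = `${block}:${contract}:${call}`
    const hit = this.cache.get(key)
    if (hit !== undefined) return hit
    this.rpcCalls++
    const value = rpc()
    this.cache.set(key, value)
    return value
  }
}
```

The point of declaring the call in the manifest is that graph-node can warm this cache ahead of the handler, so the handler's lookup never blocks on the node.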
## Conclusion
-You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs.
+You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your Subgraphs.
## Subgraph Best Practices 1-6
diff --git a/website/src/pages/cs/subgraphs/best-practices/derivedfrom.mdx b/website/src/pages/cs/subgraphs/best-practices/derivedfrom.mdx
index f6ec5a660bf2..fc9dce04c8c0 100644
--- a/website/src/pages/cs/subgraphs/best-practices/derivedfrom.mdx
+++ b/website/src/pages/cs/subgraphs/best-practices/derivedfrom.mdx
@@ -1,11 +1,11 @@
---
title: Subgraph Best Practice 2 - Improve Indexing and Query Speeds by Using @derivedFrom
-sidebarTitle: 'Subgraph Best Practice 2: Arrays with @derivedFrom'
+sidebarTitle: Arrays with @derivedFrom
---
## TLDR
-Pole ve vašem schématu mohou skutečně zpomalit výkon podgrafu, pokud jejich počet přesáhne tisíce položek. Pokud je to možné, měla by se při použití polí používat direktiva `@derivedFrom`, která zabraňuje vzniku velkých polí, zjednodušuje obslužné programy a snižuje velikost jednotlivých entit, čímž výrazně zvyšuje rychlost indexování a výkon dotazů.
+Arrays in your schema can really slow down a Subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly.
## How to Use the `@derivedFrom` Directive
@@ -15,7 +15,7 @@ Stačí ve schématu za pole přidat směrnici `@derivedFrom`. Takto:
comments: [Comment!]! @derivedFrom(field: "post")
```
-`@derivedFrom` vytváří efektivní vztahy typu one-to-many, které umožňují dynamické přiřazení entity k více souvisejícím entitám na základě pole v související entitě. Tento přístup odstraňuje nutnost ukládat duplicitní data na obou stranách vztahu, čímž se podgraf stává efektivnějším.
+`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the Subgraph more efficient.
### Example Use Case for `@derivedFrom`
@@ -60,17 +60,17 @@ type Comment @entity {
By simply adding the `@derivedFrom` directive, this schema will only store the "Comments" on the "Comments" side of the relationship and not on the "Post" side. Arrays are stored across individual rows, which allows them to expand significantly. This can lead to particularly large sizes if their growth is unbounded.
-Tím se nejen zefektivní náš podgraf, ale také se odemknou tři funkce:
+This will not only make our Subgraph more efficient, but it will also unlock three features:
1. We can query a `Post` and see all of its comments.
2. We can do a reverse lookup and query any `Comment` and see which post it comes from.
-3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings.
+3. We can use [Derived Field Loaders](/subgraphs/developing/creating/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our Subgraph mappings.
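The reverse-lookup idea behind `@derivedFrom` can be sketched in plain TypeScript (this is an illustration of the concept, not graph-ts code): the `Post` row stores no comment array at all, and a post's comments are found by filtering on the `post` field of each `Comment`.

```typescript
// Illustrative sketch: "post" on Comment is the field named in
// @derivedFrom(field: "post"); Post itself stores no array.
interface Post { id: string }
interface Comment { id: string; post: string }

// Resolve the virtual "comments" field by a reverse lookup,
// which is conceptually what the store does for derived fields.
function commentsOf(post: Post, comments: Comment[]): Comment[] {
  return comments.filter((c) => c.post === post.id)
}
```

Because nothing is appended to the `Post` entity on every new comment, the entity stays small no matter how many comments accumulate.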
## Conclusion
-Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval.
+Use the `@derivedFrom` directive in Subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval.
For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/).
diff --git a/website/src/pages/cs/subgraphs/best-practices/grafting-hotfix.mdx b/website/src/pages/cs/subgraphs/best-practices/grafting-hotfix.mdx
index 7a2dbdda86f6..541cf76d0f7a 100644
--- a/website/src/pages/cs/subgraphs/best-practices/grafting-hotfix.mdx
+++ b/website/src/pages/cs/subgraphs/best-practices/grafting-hotfix.mdx
@@ -1,26 +1,26 @@
---
title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment
-sidebarTitle: 'Subgraph Best Practice 6: Grafting and Hotfixing'
+sidebarTitle: Grafting and Hotfixing
---
## TLDR
-Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones.
+Grafting is a powerful feature in Subgraph development that allows you to build and deploy new Subgraphs while reusing the indexed data from existing ones.
### Overview
-This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services.
+This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire Subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services.
## Benefits of Grafting for Hotfixes
1. **Rapid Deployment**
- - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing.
- - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted.
+ - **Minimize Downtime**: When a Subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing.
+ - **Immediate Recovery**: The new Subgraph continues from the last indexed block, ensuring that data services remain uninterrupted.
2. **Data Preservation**
- - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records.
+ - **Reuse Historical Data**: Grafting copies the existing data from the base Subgraph, so you don’t lose valuable historical records.
- **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data.
3. **Efficiency**
@@ -31,38 +31,38 @@ This feature enables quick deployment of hotfixes for critical issues, eliminati
1. **Initial Deployment Without Grafting**
- - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected.
- - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes.
+ - **Start Clean**: Always deploy your initial Subgraph without grafting to ensure that it’s stable and functions as expected.
+ - **Test Thoroughly**: Validate the Subgraph’s performance to minimize the need for future hotfixes.
2. **Implementing the Hotfix with Grafting**
- **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event.
- - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix.
- - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph.
- - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible.
+ - **Create a New Subgraph**: Develop a new Subgraph that includes the hotfix.
+ - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed Subgraph.
+ - **Deploy Quickly**: Publish the grafted Subgraph to restore service as soon as possible.
3. **Post-Hotfix Actions**
- - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue.
- - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance.
+ - **Monitor Performance**: Ensure the grafted Subgraph is indexing correctly and the hotfix resolves the issue.
+ - **Republish Without Grafting**: Once stable, deploy a new version of the Subgraph without grafting for long-term maintenance.
> Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance.
- - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph.
+ - **Update References**: Redirect any services or applications to use the new, non-grafted Subgraph.
4. **Important Considerations**
- **Careful Block Selection**: Choose the graft block number carefully to prevent data loss.
- **Tip**: Use the block number of the last correctly processed event.
- - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID.
- - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment.
- - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features.
+ - **Use Deployment ID**: Ensure you reference the Deployment ID of the base Subgraph, not the Subgraph ID.
+ - **Note**: The Deployment ID is the unique identifier for a specific Subgraph deployment.
+ - **Feature Declaration**: Remember to declare grafting in the Subgraph manifest under features.
## Example: Deploying a Hotfix with Grafting
-Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix.
+Suppose you have a Subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix.
1. **Failed Subgraph Manifest (subgraph.yaml)**
```yaml
- specVersion: 1.0.0
+ specVersion: 1.3.0
schema:
file: ./schema.graphql
dataSources:
@@ -75,7 +75,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
startBlock: 5000000
mapping:
kind: ethereum/events
- apiVersion: 0.0.7
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Withdrawal
@@ -90,7 +90,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
2. **New Grafted Subgraph Manifest (subgraph.yaml)**
```yaml
- specVersion: 1.0.0
+ specVersion: 1.3.0
schema:
file: ./schema.graphql
dataSources:
@@ -103,7 +103,7 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
startBlock: 6000001 # Block after the last indexed block
mapping:
kind: ethereum/events
- apiVersion: 0.0.7
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Withdrawal
@@ -117,16 +117,16 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
features:
- grafting
graft:
- base: QmBaseDeploymentID # Deployment ID of the failed subgraph
+ base: QmBaseDeploymentID # Deployment ID of the failed Subgraph
block: 6000000 # Last successfully indexed block
```
**Explanation:**
-- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract.
+- **Data Source Update**: The new Subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract.
- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error.
- **Grafting Configuration**:
- - **base**: Deployment ID of the failed subgraph.
+ - **base**: Deployment ID of the failed Subgraph.
- **block**: Block number where grafting should begin.
3. **Deployment Steps**
@@ -135,10 +135,10 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
- **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations.
- **Deploy the Subgraph**:
- Authenticate with the Graph CLI.
- - Deploy the new subgraph using `graph deploy`.
+ - Deploy the new Subgraph using `graph deploy`.
4. **Post-Deployment**
- - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point.
+ - **Verify Indexing**: Check that the Subgraph is indexing correctly from the graft point.
- **Monitor Data**: Ensure that new data is being captured and the hotfix is effective.
- **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability.
@@ -146,9 +146,9 @@ Suppose you have a subgraph tracking a smart contract that has stopped indexing
While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance.
-- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema.
+- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new Subgraph’s schema to be compatible with the base Subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema.
- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing.
-- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability.
+- **Deployments to The Graph Network**: Grafting is not recommended for Subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the Subgraph from scratch to ensure full compatibility and reliability.
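The manifest rules discussed above lend themselves to a quick pre-deployment sanity check. The sketch below is a hypothetical helper (not part of any Graph tooling); the manifest shape is simplified to just the fields relevant to grafting.

```typescript
// Simplified manifest shape for illustration only.
interface GraftManifest {
  features: string[]
  startBlock: number
  graft: { base: string; block: number }
}

// Check the grafting rules: grafting must be declared under `features`,
// startBlock should be one block after the graft point, and `base` should
// be a Deployment ID (Qm... hash), not a Subgraph ID.
function checkGraftConfig(m: GraftManifest): string[] {
  const problems: string[] = []
  if (!m.features.includes('grafting')) problems.push('missing "grafting" in features')
  if (m.startBlock !== m.graft.block + 1)
    problems.push('startBlock should be graft.block + 1')
  if (!m.graft.base.startsWith('Qm')) problems.push('graft.base should be a Deployment ID')
  return problems
}
```

Running such a check before `graph deploy` catches the most common grafting mistakes: an undeclared feature, a reprocessed failing block, or the wrong kind of ID in `base`.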
### Risk Management
@@ -157,20 +157,20 @@ While grafting is a powerful tool for deploying hotfixes quickly, there are spec
## Conclusion
-Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to:
+Grafting is an effective strategy for deploying hotfixes in Subgraph development, enabling you to:
- **Quickly Recover** from critical errors without re-indexing.
- **Preserve Historical Data**, maintaining continuity for applications and users.
- **Ensure Service Availability** by minimizing downtime during critical fixes.
-However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability.
+However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your Subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability.
## Additional Resources
- **[Grafting Documentation](/subgraphs/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting
- **[Understanding Deployment IDs](/subgraphs/querying/subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID.
-By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable.
+By incorporating grafting into your Subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable.
## Subgraph Best Practices 1-6
diff --git a/website/src/pages/cs/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx b/website/src/pages/cs/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx
index 5b058ee9d7cf..e4e191353476 100644
--- a/website/src/pages/cs/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx
+++ b/website/src/pages/cs/subgraphs/best-practices/immutable-entities-bytes-as-ids.mdx
@@ -1,6 +1,6 @@
---
title: Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs
-sidebarTitle: 'Subgraph Best Practice 3: Immutable Entities and Bytes as IDs'
+sidebarTitle: Immutable Entities and Bytes as IDs
---
## TLDR
@@ -50,12 +50,12 @@ I když jsou možné i jiné typy ID, například String a Int8, doporučuje se
### Reasons Not to Use Bytes as IDs
1. If entity IDs must be human-readable, such as auto-incremented numerical IDs or readable strings, Bytes should not be used for the IDs.
-2. Při integraci dat podgrafu s jiným datovým modelem, který nepoužívá bajty jako IDs, by se bajty jako IDs neměly používat.
+2. If a Subgraph’s data is being integrated with another data model that does not use Bytes as IDs, Bytes as IDs should not be used.
3. Improvements to indexing and querying performance are not desired.
### Concatenating With Bytes as IDs
-V mnoha podgrafech se běžně používá spojování řetězců ke spojení dvou vlastností události do jediného ID, například pomocí `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. Protože se však tímto způsobem vrací řetězec, značně to zhoršuje indexování podgrafů a výkonnost dotazování.
+It is a common practice in many Subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, because this returns a string, it significantly impedes Subgraph indexing and querying performance.
Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is far more performant.
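What `concatI32()` does can be approximated in plain TypeScript: the 32-byte transaction hash gets the log index appended as 4 raw bytes, giving a compact 36-byte ID instead of a 60+ character string. This is an illustration only; graph-ts defines the exact integer byte order, which is an assumption in this sketch.

```typescript
// Approximate `event.transaction.hash.concatI32(event.logIndex.toI32())`:
// append the i32 as 4 raw bytes after the hash.
// (Byte order here is illustrative; graph-ts fixes the real encoding.)
function concatI32(hash: Uint8Array, index: number): Uint8Array {
  const out = new Uint8Array(hash.length + 4)
  out.set(hash, 0)
  new DataView(out.buffer).setInt32(hash.length, index)
  return out
}
```

Compared to the hex-string-plus-dash scheme, the resulting ID is both smaller to store and cheaper to compare and index.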
@@ -172,7 +172,7 @@ Odpověď na dotaz:
## Conclusion
-Bylo prokázáno, že použití neměnných entit i bytů jako ID výrazně zvyšuje efektivitu podgrafů. Testy konkrétně ukázaly až 28% nárůst výkonu dotazů a až 48% zrychlení indexace.
+Using both Immutable Entities and Bytes as IDs has been shown to markedly improve Subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds.
Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/).
diff --git a/website/src/pages/cs/subgraphs/best-practices/pruning.mdx b/website/src/pages/cs/subgraphs/best-practices/pruning.mdx
index e6b23f71c409..6fd068f449d6 100644
--- a/website/src/pages/cs/subgraphs/best-practices/pruning.mdx
+++ b/website/src/pages/cs/subgraphs/best-practices/pruning.mdx
@@ -1,11 +1,11 @@
---
title: Best Practice 1 - Improve Query Speed with Subgraph Pruning
-sidebarTitle: 'Subgraph Best Practice 1: Pruning with indexerHints'
+sidebarTitle: Pruning with indexerHints
---
## TLDR
-[Pruning](/developing/creating-a-subgraph/#prune) odstraní archivní entity z databáze podgrafu až do daného bloku a odstranění nepoužívaných entit z databáze podgrafu zlepší výkonnost dotazu podgrafu, často výrazně. Použití `indexerHints` je snadný způsob, jak podgraf ořezat.
+[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the Subgraph’s database up to a given block, and removing unused entities from a Subgraph’s database will improve a Subgraph’s query performance, often dramatically. Using `indexerHints` is an easy way to prune a Subgraph.
## How to Prune a Subgraph With `indexerHints`
@@ -13,14 +13,14 @@ Přidejte do manifestu sekci `indexerHints`.
`indexerHints` has three `prune` options:
-- `prune: auto`: Udržuje minimální potřebnou historii nastavenou indexátorem, čímž optimalizuje výkon dotazu. Toto je obecně doporučené nastavení a je výchozí pro všechny podgrafy vytvořené pomocí `graph-cli` >= 0.66.0.
+- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all Subgraphs created by `graph-cli` >= 0.66.0.
- `prune: <number>`: Sets a custom limit on the number of historical blocks to retain.
- `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. `prune: never` should be selected if [Time Travel Queries](/subgraphs/querying/graphql-api/#time-travel-queries) are desired.
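What each `prune` option means for queryability can be sketched as follows. This is an illustrative model, not graph-node code; `minHistory` stands in for the Indexer-configured minimum retained history, whose actual value varies per Indexer.

```typescript
// Illustrative model of the three prune settings: the earliest block
// still answerable by a (time travel) query, given the chain head.
type Prune = 'auto' | 'never' | number

function earliestQueryableBlock(head: number, prune: Prune, minHistory: number): number {
  if (prune === 'never') return 0 // full history retained
  const retained = prune === 'auto' ? minHistory : prune
  return Math.max(0, head - retained)
}
```

The sketch makes the trade-off concrete: `never` keeps everything (and pays for it in query performance), while `auto` and a numeric limit shrink the queryable window in exchange for speed.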
-Aktualizací souboru `subgraph.yaml` můžeme do podgrafů přidat `indexerHints`:
+We can add `indexerHints` to our Subgraphs by updating our `subgraph.yaml`:
```yaml
-specVersion: 1.0.0
+specVersion: 1.3.0
schema:
file: ./schema.graphql
indexerHints:
@@ -39,7 +39,7 @@ dataSources:
## Conclusion
-Ořezávání pomocí `indexerHints` je osvědčeným postupem pro vývoj podgrafů, který nabízí významné zlepšení výkonu dotazů.
+Pruning using `indexerHints` is a best practice for Subgraph development, offering significant query performance improvements.
## Subgraph Best Practices 1-6
diff --git a/website/src/pages/cs/subgraphs/best-practices/timeseries.mdx b/website/src/pages/cs/subgraphs/best-practices/timeseries.mdx
index f35ab0913563..dae73ede9ff3 100644
--- a/website/src/pages/cs/subgraphs/best-practices/timeseries.mdx
+++ b/website/src/pages/cs/subgraphs/best-practices/timeseries.mdx
@@ -1,11 +1,11 @@
---
title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations
-sidebarTitle: 'Subgraph Best Practice 5: Timeseries and Aggregations'
+sidebarTitle: Timeseries and Aggregations
---
## TLDR
-Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance.
+Leveraging the new time-series and aggregations feature in Subgraphs can significantly enhance both indexing speed and query performance.
## Overview
@@ -36,6 +36,10 @@ Timeseries and aggregations reduce data processing overhead and accelerate queri
## How to Implement Timeseries and Aggregations
+### Prerequisites
+
+You need `specVersion` >= 1.1.0 for this feature.
+
### Defining Timeseries Entities
A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements:
@@ -51,7 +55,7 @@ Příklad:
type Data @entity(timeseries: true) {
id: Int8!
timestamp: Timestamp!
- price: BigDecimal!
+ amount: BigDecimal!
}
```
@@ -68,11 +72,11 @@ Příklad:
type Stats @aggregation(intervals: ["hour", "day"], source: "Data") {
id: Int8!
timestamp: Timestamp!
- sum: BigDecimal! @aggregate(fn: "sum", arg: "price")
+ sum: BigDecimal! @aggregate(fn: "sum", arg: "amount")
}
```
-In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum.
+In this example, Stats aggregates the amount field from Data over hourly and daily intervals, computing the sum.
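The hourly "sum" aggregation can be sketched in plain TypeScript to show what graph-node computes from the raw `Data` points. This is a conceptual illustration: timestamps are plain seconds and amounts plain numbers here, whereas the real feature uses `Timestamp` and `BigDecimal`.

```typescript
// Raw timeseries point, simplified from the Data entity above.
interface DataPoint { timestamp: number; amount: number }

// Bucket points into hours (keyed by the hour's start, in seconds)
// and sum the amounts per bucket -- the "hour" interval of the
// @aggregate(fn: "sum", arg: "amount") declaration.
function hourlySums(points: DataPoint[]): Map<number, number> {
  const buckets = new Map<number, number>()
  for (const p of points) {
    const hour = Math.floor(p.timestamp / 3600) * 3600
    buckets.set(hour, (buckets.get(hour) ?? 0) + p.amount)
  }
  return buckets
}
```

Because graph-node maintains these buckets itself as data arrives, the mapping code never has to load, update, and re-save a running total.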
### Querying Aggregated Data
@@ -172,13 +176,13 @@ Supported operators and functions include basic arithmetic (+, -, \_, /), compar
### Conclusion
-Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach:
+Implementing timeseries and aggregations in Subgraphs is a best practice for projects dealing with time-based data. This approach:
- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead.
- Simplifies Development: Eliminates the need for manual aggregation logic in mappings.
- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness.
-By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs.
+By adopting this pattern, developers can build more efficient and scalable Subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your Subgraphs.
## Subgraph Best Practices 1-6
diff --git a/website/src/pages/cs/subgraphs/billing.mdx b/website/src/pages/cs/subgraphs/billing.mdx
index 4118bf1d451a..b78c375c4aee 100644
--- a/website/src/pages/cs/subgraphs/billing.mdx
+++ b/website/src/pages/cs/subgraphs/billing.mdx
@@ -4,12 +4,14 @@ title: Fakturace
## Querying Plans
-There are two plans to use when querying subgraphs on The Graph Network.
+There are two plans to use when querying Subgraphs on The Graph Network.
- **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp.
- **Growth Plan**: The Growth Plan includes everything in the Free Plan with all queries after 100,000 monthly queries requiring payments with GRT or credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases.
+Learn more about pricing [here](https://thegraph.com/studio-pricing/).
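The split between the two plans amounts to a simple threshold: the first 100,000 monthly queries are free, and only the remainder is billable. The sketch below illustrates that arithmetic only; per-query pricing is not specified here, so no rate is assumed.

```typescript
// Free monthly allowance stated above for both plans.
const FREE_MONTHLY_QUERIES = 100_000

// Queries beyond the free allowance are the ones billed on the Growth Plan.
function billableQueries(totalMonthlyQueries: number): number {
  return Math.max(0, totalMonthlyQueries - FREE_MONTHLY_QUERIES)
}
```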
+
## Query Payments with credit card
diff --git a/website/src/pages/cs/subgraphs/cookbook/arweave.mdx b/website/src/pages/cs/subgraphs/cookbook/arweave.mdx
index d59897ad4e03..dff8facf77d4 100644
--- a/website/src/pages/cs/subgraphs/cookbook/arweave.mdx
+++ b/website/src/pages/cs/subgraphs/cookbook/arweave.mdx
@@ -2,7 +2,7 @@
title: Building Subgraphs on Arweave
---
-> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave subgraphs!
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs!
In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain.
@@ -25,12 +25,12 @@ The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are
To be able to build and deploy Arweave Subgraphs, you need two packages:
-1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
-2. `@graphprotocol/graph-ts` above version 0.27.0 - This is library of subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.
+1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
+2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.
## Komponenty podgrafu
-Podgraf má tři Komponenty:
+There are three components of a Subgraph:
### 1. Manifest - `subgraph.yaml`
@@ -40,25 +40,25 @@ Definuje zdroje dat, které jsou předmětem zájmu, a způsob jejich zpracován
Zde definujete, na která data se chcete po indexování subgrafu pomocí jazyka GraphQL dotazovat. Je to vlastně podobné modelu pro API, kde model definuje strukturu těla požadavku.
-The requirements for Arweave subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
### 3. AssemblyScript Mappings - `mapping.ts`
Jedná se o logiku, která určuje, jak mají být data načtena a uložena, když někdo komunikuje se zdroji dat, kterým nasloucháte. Data se přeloží a uloží na základě schématu, které jste uvedli.
-Při vývoji podgrafů existují dva klíčové příkazy:
+During Subgraph development there are two key commands:
```
-$ graph codegen # generuje typy ze souboru se schématem identifikovaným v manifestu
-$ graph build # vygeneruje webové sestavení ze souborů AssemblyScript a připraví všechny dílčí soubory do složky /build
+$ graph codegen # generates types from the schema file identified in the manifest
+$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
```
## Definice podgrafu Manifest
-The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for an Arweave subgraph:
+The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph:
```yaml
-specVersion: 0.0.5
+specVersion: 1.3.0
description: Arweave Blocks Indexing
schema:
file: ./schema.graphql # link to the schema file
@@ -70,7 +70,7 @@ dataSources:
owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet
startBlock: 0 # set this to 0 to start indexing from chain genesis
mapping:
- apiVersion: 0.0.5
+ apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/blocks.ts # link to the file with the Assemblyscript mappings
entities:
@@ -82,7 +82,7 @@ dataSources:
- handler: handleTx # the function name in the mapping file
```
-- Arweave subgraphs introduce a new kind of data source (`arweave`)
+- Arweave Subgraphs introduce a new kind of data source (`arweave`)
- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet`
- Zdroje dat Arweave obsahují nepovinné pole source.owner, což je veřejný klíč peněženky Arweave
@@ -99,7 +99,7 @@ Datové zdroje Arweave podporují dva typy zpracovatelů:
## Definice schématu
-Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
## AssemblyScript Mapování
@@ -152,7 +152,7 @@ Writing the mappings of an Arweave Subgraph is very similar to writing the mappi
## Nasazení podgrafu Arweave v Podgraf Studio
-Once your subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command.
+Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command.
```bash
graph deploy --access-token
@@ -160,25 +160,25 @@ graph deploy --access-token
## Dotazování podgrafu Arweave
-The GraphQL endpoint for Arweave subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
+The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
## Příklady podgrafů
-Zde je příklad podgrafu pro referenci:
+Here is an example Subgraph for reference:
-- [Example subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions)
+- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions)
## FAQ
-### Může podgraf indexovat Arweave a další řetězce?
+### Can a Subgraph index Arweave and other chains?
-Ne, podgraf může podporovat zdroje dat pouze z jednoho řetězce/sítě.
+No, a Subgraph can only support data sources from one chain/network.
### Mohu indexovat uložené soubory v Arweave?
V současné době The Graph indexuje pouze Arweave jako blockchain (jeho bloky a transakce).
-### Mohu identifikovat svazky Bundlr ve svém podgrafu?
+### Can I identify Bundlr bundles in my Subgraph?
Toto není aktuálně podporováno.
@@ -188,7 +188,7 @@ Source.owner může být veřejný klíč uživatele nebo adresa účtu.
### Jaký je aktuální formát šifrování?
-Data is generally passed into the mappings as Bytes, which if stored directly is returned in the subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/).
+Data is generally passed into the mappings as Bytes, which, if stored directly, is returned in the Subgraph in a `hex` format (e.g. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/).
The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`:
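The helper's body is elided in this hunk. As a rough sketch of what a function with that signature could look like — written here in plain TypeScript rather than AssemblyScript, and assuming the standard RFC 4648 base64/base64url alphabets with unpadded base64url output:

```typescript
// Sketch of a bytesToBase64 helper matching the described signature.
// Assumptions: RFC 4648 alphabets; urlSafe output is left unpadded.
const BASE64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
const BASE64URL = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";

function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string {
  const alphabet = urlSafe ? BASE64URL : BASE64;
  let result = "";
  for (let i = 0; i < bytes.length; i += 3) {
    // Pack up to three input bytes into four 6-bit alphabet indices.
    const b0 = bytes[i];
    const b1 = i + 1 < bytes.length ? bytes[i + 1] : 0;
    const b2 = i + 2 < bytes.length ? bytes[i + 2] : 0;
    result += alphabet[b0 >> 2];
    result += alphabet[((b0 & 0x03) << 4) | (b1 >> 4)];
    if (i + 1 < bytes.length) result += alphabet[((b1 & 0x0f) << 2) | (b2 >> 6)];
    if (i + 2 < bytes.length) result += alphabet[b2 & 0x3f];
  }
  // Standard base64 pads to a multiple of 4; base64url commonly omits padding.
  if (!urlSafe) {
    while (result.length % 4 !== 0) result += "=";
  }
  return result;
}
```

Converting the stored `Bytes` with `urlSafe = true` yields the form shown by explorers such as Arweave Explorer.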
diff --git a/website/src/pages/cs/subgraphs/cookbook/enums.mdx b/website/src/pages/cs/subgraphs/cookbook/enums.mdx
index 71f3f784a0eb..7cc0e6c0ed78 100644
--- a/website/src/pages/cs/subgraphs/cookbook/enums.mdx
+++ b/website/src/pages/cs/subgraphs/cookbook/enums.mdx
@@ -10,7 +10,7 @@ Enums, or enumeration types, are a specific data type that allows you to define
### Example of Enums in Your Schema
-If you're building a subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned.
+If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned.
You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity.
@@ -65,7 +65,7 @@ Enums provide type safety, minimize typo risks, and ensure consistent and reliab
> Note: The following guide uses the CryptoCoven NFT smart contract.
-To define enums for the various marketplaces where NFTs are traded, use the following in your subgraph schema:
+To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema:
```gql
# Enum for Marketplaces that the CryptoCoven contract interacted with (likely a Trade/Mint)
@@ -80,7 +80,7 @@ enum Marketplace {
## Using Enums for NFT Marketplaces
-Once defined, enums can be used throughout your subgraph to categorize transactions or events.
+Once defined, enums can be used throughout your Subgraph to categorize transactions or events.
For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum.
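As a minimal sketch of that pattern — in plain TypeScript rather than AssemblyScript, with illustrative placeholder addresses and enum values assumed from the partially shown `Marketplace` enum:

```typescript
// Sketch only: a mapping stores an enum field by assigning the enum value's
// string name. Addresses below are hypothetical placeholders for this example.
type Marketplace = "OpenSeaV1" | "OpenSeaV2" | "SeaPort" | "LooksRare" | "Unknown";

const KNOWN_EXCHANGES: { [address: string]: Marketplace } = {
  "0x7be8076f4ea4a4ad08075c2508e481d6c946d12b": "OpenSeaV1",
  "0x7f268357a8c2552623316e2562d90e642bb538e5": "OpenSeaV2",
};

// Resolve the marketplace enum value for a sale, defaulting to "Unknown"
// so only predefined values ever reach the entity field.
function getMarketplaceName(exchangeAddress: string): Marketplace {
  return KNOWN_EXCHANGES[exchangeAddress.toLowerCase()] ?? "Unknown";
}
```

Because the return type is restricted to the enum's members, a typo such as `"OpenSeaV3"` fails at build time instead of producing an invalid entity.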
diff --git a/website/src/pages/cs/subgraphs/cookbook/grafting.mdx b/website/src/pages/cs/subgraphs/cookbook/grafting.mdx
index ca0ab0367451..a7bad43c9c1f 100644
--- a/website/src/pages/cs/subgraphs/cookbook/grafting.mdx
+++ b/website/src/pages/cs/subgraphs/cookbook/grafting.mdx
@@ -2,13 +2,13 @@
title: Nahrazení smlouvy a zachování její historie pomocí roubování
---
-V této příručce se dozvíte, jak vytvářet a nasazovat nové podgrafy roubováním stávajících podgrafů.
+In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs.
## Co je to roubování?
-Při roubování se znovu použijí data z existujícího podgrafu a začne se indexovat v pozdějším bloku. To je užitečné během vývoje, abyste se rychle dostali přes jednoduché chyby v mapování nebo abyste dočasně znovu zprovoznili existující podgraf po jeho selhání. Také ji lze použít při přidávání funkce do podgrafu, které trvá dlouho, než se indexuje od začátku.
+Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch.
-Štěpovaný podgraf může používat schéma GraphQL, které není totožné se schématem základního podgrafu, ale je s ním pouze kompatibilní. Musí to být platné schéma podgrafu jako takové, ale může se od schématu základního podgrafu odchýlit následujícími způsoby:
+The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways:
- Přidává nebo odebírá typy entit
- Odstraňuje atributy z typů entit
@@ -22,38 +22,38 @@ Další informace naleznete na:
- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs)
-In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing subgraph onto the "base" subgraph that tracks the new contract.
+In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract.
## Důležité upozornění k roubování při aktualizaci na síť
-> **Caution**: It is recommended to not use grafting for subgraphs published to The Graph Network
+> **Caution**: It is recommended not to use grafting for Subgraphs published to The Graph Network.
### Proč je to důležité?
-Štěpování je výkonná funkce, která umožňuje "naroubovat" jeden podgraf na druhý, čímž efektivně přenese historická data ze stávajícího podgrafu do nové verze. Podgraf není možné naroubovat ze Sítě grafů zpět do Podgraf Studio.
+Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio.
### Osvědčené postupy
-**Initial Migration**: when you first deploy your subgraph to the decentralized network, do so without grafting. Ensure that the subgraph is stable and functioning as expected.
+**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected.
-**Subsequent Updates**: once your subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data.
+**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data.
Dodržováním těchto pokynů minimalizujete rizika a zajistíte hladší průběh migrace.
## Vytvoření existujícího podgrafu
-Building subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing subgraph used in this tutorial, the following repo is provided:
+Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided:
- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial)
-> Note: The contract used in the subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit).
+> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit).
## Definice podgrafu Manifest
-The subgraph manifest `subgraph.yaml` identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest that you will use:
+The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use:
```yaml
-specVersion: 0.0.4
+specVersion: 1.3.0
schema:
file: ./schema.graphql
dataSources:
@@ -66,7 +66,7 @@ dataSources:
startBlock: 5955690
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Withdrawal
@@ -85,27 +85,27 @@ dataSources:
## Definice manifestu roubování
-Roubování vyžaduje přidání dvou nových položek do původního manifestu podgrafu:
+Grafting requires adding two new items to the original Subgraph manifest:
```yaml
---
features:
- grafting # feature name
graft:
- base: Qm... # subgraph ID of base subgraph
+ base: Qm... # Subgraph ID of base Subgraph
block: 5956000 # block number
```
- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features).
-- `graft:` is a map of the `base` subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base subgraph up to and including the given block and then continue indexing the new subgraph from that block on.
+- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on.
-The `base` and `block` values can be found by deploying two subgraphs: one for the base indexing and one with grafting
+The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting.
## Nasazení základního podgrafu
-1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-example`
-2. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-example` folder from the repo
-3. Po dokončení ověřte, zda se podgraf správně indexuje. Pokud spustíte následující příkaz v The Graph Playground
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example`
+2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo
+3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground
```graphql
{
@@ -138,16 +138,16 @@ Vrátí něco takového:
}
```
-Jakmile ověříte, že se podgraf správně indexuje, můžete jej rychle aktualizovat pomocí roubování.
+Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting.
## Nasazení podgrafu roubování
Náhradní podgraf.yaml bude mít novou adresu smlouvy. K tomu může dojít při aktualizaci dapp, novém nasazení kontraktu atd.
-1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-replacement`
-2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) you care about by the old contract and the `base` of the old subgraph. The `base` subgraph ID is the `Deployment ID` of your original `graph-example` subgraph. You can find this in Subgraph Studio.
-3. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-replacement` folder from the repo
-4. Po dokončení ověřte, zda se podgraf správně indexuje. Pokud spustíte následující příkaz v The Graph Playground
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement`
+2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) by the old contract that you care about, and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio.
+3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo
+4. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground
```graphql
{
@@ -185,9 +185,9 @@ Měla by vrátit následující:
}
```
-You can see that the `graft-replacement` subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` subgraph.
+You can see that the `graft-replacement` Subgraph is indexing older `graft-example` data together with newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` event afterward, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` Subgraph.
-Gratulujeme! Úspěšně jste naroubovali podgraf na jiný podgraf.
+Congrats! You have successfully grafted a Subgraph onto another Subgraph.
## Další zdroje
diff --git a/website/src/pages/cs/subgraphs/cookbook/near.mdx b/website/src/pages/cs/subgraphs/cookbook/near.mdx
index dc65c11da629..275c2aba0fd4 100644
--- a/website/src/pages/cs/subgraphs/cookbook/near.mdx
+++ b/website/src/pages/cs/subgraphs/cookbook/near.mdx
@@ -2,17 +2,17 @@
title: Vytváření podgrafů v NEAR
---
-This guide is an introduction to building subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/).
+This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/).
## Co je NEAR?
[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information.
-## Co jsou podgrafy NEAR?
+## What are NEAR Subgraphs?
-The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build subgraphs to index their smart contracts.
+The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts.
-Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR subgraphs:
+Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs:
- Obsluhy bloků: jsou spouštěny při každém novém bloku.
- Obsluhy příjmu: spouštějí se pokaždé, když je zpráva provedena na zadaném účtu.
@@ -23,35 +23,35 @@ Subgraphs are event-based, which means that they listen for and then process onc
## Sestavení podgrafu NEAR
-`@graphprotocol/graph-cli` is a command-line tool for building and deploying subgraphs.
+`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs.
-`@graphprotocol/graph-ts` is a library of subgraph-specific types.
+`@graphprotocol/graph-ts` is a library of Subgraph-specific types.
-NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`.
+NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`.
-> Vytváření subgrafu NEAR je velmi podobné vytváření subgrafu, který indexuje Ethereum.
+> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum.
-Definice podgrafů má tři aspekty:
+There are three aspects of Subgraph definition:
-**subgraph.yaml:** the subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source.
+**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source.
-**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality.
-Při vývoji podgrafů existují dva klíčové příkazy:
+During Subgraph development there are two key commands:
```bash
-$ graph codegen # generuje typy ze souboru se schématem identifikovaným v manifestu
-$ graph build # vygeneruje webové sestavení ze souborů AssemblyScript a připraví všechny dílčí soubory do složky /build
+$ graph codegen # generates types from the schema file identified in the manifest
+$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
```
### Definice podgrafu Manifest
-The subgraph manifest (`subgraph.yaml`) identifies the data sources for the subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example subgraph manifest for a NEAR subgraph:
+The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph:
```yaml
-specVersion: 0.0.2
+specVersion: 1.3.0
schema:
file: ./src/schema.graphql # link to the schema file
dataSources:
@@ -61,7 +61,7 @@ dataSources:
account: app.good-morning.near # This data source will monitor this account
startBlock: 10662188 # Required for NEAR
mapping:
- apiVersion: 0.0.5
+ apiVersion: 0.0.9
language: wasm/assemblyscript
blockHandlers:
- handler: handleNewBlock # the function name in the mapping file
@@ -70,7 +70,7 @@ dataSources:
file: ./src/mapping.ts # link to the file with the Assemblyscript mappings
```
-- NEAR subgraphs introduce a new `kind` of data source (`near`)
+- NEAR Subgraphs introduce a new `kind` of data source (`near`)
- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet`
- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account.
- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least one prefix or suffix must be specified; they will match any account starting or ending with the listed values, respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary the other field can be omitted.
@@ -92,7 +92,7 @@ Zdroje dat NEAR podporují dva typy zpracovatelů:
### Definice schématu
-Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
### AssemblyScript Mapování
@@ -165,31 +165,31 @@ These types are passed to block & receipt handlers:
- Block handlers will receive a `Block`
- Receipt handlers will receive a `ReceiptWithOutcome`
-Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR subgraph developers during mapping execution.
+Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution.
This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSON. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs.
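A minimal sketch of that log-parsing pattern — in plain TypeScript using `JSON.parse`, whereas inside a graph-ts mapping you would use `json.fromString` from the JSON API; the `EVENT_JSON:` prefix and payload shape are assumptions for the example:

```typescript
// Many NEAR contracts prefix structured logs, e.g. "EVENT_JSON:{...}".
// This is an illustrative stand-in, not graph-ts code.
interface NearLogEvent {
  event: string;
  data: Record<string, unknown>;
}

function parseNearLog(log: string): NearLogEvent | null {
  const prefix = "EVENT_JSON:";
  if (!log.startsWith(prefix)) return null; // not a structured log line
  try {
    const parsed = JSON.parse(log.slice(prefix.length));
    if (typeof parsed.event !== "string") return null;
    return { event: parsed.event, data: parsed.data ?? {} };
  } catch {
    return null; // malformed JSON: skip rather than fail the mapping
  }
}
```

Returning `null` for unrecognized or malformed lines lets a receipt handler iterate over all logs and act only on the events it understands.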
## Nasazení podgrafu NEAR
-Once you have a built subgraph, it is time to deploy it to Graph Node for indexing. NEAR subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released).
+Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released).
Subgraph Studio and the upgrade Indexer on The Graph Network currently support indexing NEAR mainnet and testnet in beta, with the following network names:
- `near-mainnet`
- `near-testnet`
-More information on creating and deploying subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/).
+More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/).
-As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a subgraph".
+As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph".
-Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command:
+Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command:
```sh
-$ graph create --node # creates a subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
-$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the subgraph to a specified Graph Node based on the manifest IPFS hash
+$ graph create --node # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
+$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
```
-Konfigurace uzlů závisí na tom, kde je podgraf nasazen.
+The node configuration will depend on where the Subgraph is being deployed.
### Podgraf Studio
@@ -204,7 +204,7 @@ graph deploy
graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001
```
-Jakmile je podgraf nasazen, bude indexován pomocí Graph Node. Jeho průběh můžete zkontrolovat dotazem na samotný podgraf:
+Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself:
```graphql
{
@@ -228,11 +228,11 @@ Brzy vám poskytneme další informace o provozu výše uvedených komponent.
## Dotazování podgrafu NEAR
-The GraphQL endpoint for NEAR subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
+The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
## Příklady podgrafů
-Zde je několik příkladů podgrafů:
+Here are some example Subgraphs for reference:
[NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks)
@@ -242,13 +242,13 @@ Zde je několik příkladů podgrafů:
### Jak funguje beta verze?
-Podpora NEAR je ve fázi beta, což znamená, že v API může dojít ke změnám, protože budeme pokračovat ve zdokonalování integrace. Napište nám prosím na adresu near@thegraph.com, abychom vás mohli podpořit při vytváření podgrafů NEAR a informovat vás o nejnovějším vývoji!
+NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments!
-### Může podgraf indexovat řetězce NEAR i EVM?
+### Can a Subgraph index both NEAR and EVM chains?
-Ne, podgraf může podporovat zdroje dat pouze z jednoho řetězce/sítě.
+No, a Subgraph can only support data sources from one chain/network.
-### Mohou podgrafy reagovat na specifičtější spouštěče?
+### Can Subgraphs react to more specific triggers?
V současné době jsou podporovány pouze spouštěče Blok a Příjem. Zkoumáme spouštěče pro volání funkcí na zadaném účtu. Máme také zájem o podporu spouštěčů událostí, jakmile bude mít NEAR nativní podporu událostí.
@@ -262,21 +262,21 @@ accounts:
- mintbase1.near
```
-### Mohou podgrafy NEAR během mapování volat zobrazení na účty NEAR?
+### Can NEAR Subgraphs make view calls to NEAR accounts during mappings?
To není podporováno. Vyhodnocujeme, zda je tato funkce pro indexování nutná.
-### Mohu v podgrafu NEAR používat šablony zdrojů dat?
+### Can I use data source templates in my NEAR Subgraph?
Tato funkce není v současné době podporována. Vyhodnocujeme, zda je tato funkce pro indexování nutná.
-### Podgrafy Ethereum podporují verze "pending" a "current", jak mohu nasadit verzi "pending" podgrafu NEAR?
+### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph?
-Pro podgrafy NEAR zatím nejsou podporovány čekající funkce. V mezidobí můžete novou verzi nasadit do jiného "pojmenovaného" podgrafu a po jeho synchronizaci s hlavou řetězce ji můžete znovu nasadit do svého hlavního "pojmenovaného" podgrafu, který bude používat stejné ID nasazení, takže hlavní podgraf bude okamžitě synchronizován.
+Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced.
-### Moje otázka nebyla zodpovězena, kde mohu získat další pomoc při vytváření podgrafů NEAR?
+### My question hasn't been answered, where can I get more help building NEAR Subgraphs?
-If it is a general question about subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com.
+If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com.
## Odkazy:
diff --git a/website/src/pages/cs/subgraphs/cookbook/polymarket.mdx b/website/src/pages/cs/subgraphs/cookbook/polymarket.mdx
index 2edab84a377b..74efe387b0d7 100644
--- a/website/src/pages/cs/subgraphs/cookbook/polymarket.mdx
+++ b/website/src/pages/cs/subgraphs/cookbook/polymarket.mdx
@@ -3,17 +3,17 @@ title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph
sidebarTitle: Query Polymarket Data
---
-Query Polymarket’s onchain data using GraphQL via subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains.
+Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains.
## Polymarket Subgraph on Graph Explorer
-You can see an interactive query playground on the [Polymarket subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query.
+You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query.

## How to use the Visual Query Editor
-The visual query editor helps you test sample queries from your subgraph.
+The visual query editor helps you test sample queries from your Subgraph.
You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want.
@@ -73,7 +73,7 @@ You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on
## Polymarket's GraphQL Schema
-The schema for this subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql).
+The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql).
### Polymarket Subgraph Endpoint
@@ -88,7 +88,7 @@ The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegra
1. Go to [https://thegraph.com/studio](http://thegraph.com/studio) and connect your wallet
2. Go to https://thegraph.com/studio/apikeys/ to create an API key
-You can use this API key on any subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.
+You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.
100k queries per month are free, which is perfect for your side project!
@@ -143,6 +143,6 @@ axios(graphQLRequest)
### Additional resources
-For more information about querying data from your subgraph, read more [here](/subgraphs/querying/introduction/).
+For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/).
-To explore all the ways you can optimize & customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/).
+To explore all the ways you can optimize & customize your Subgraph for better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/cs/subgraphs/cookbook/secure-api-keys-nextjs.mdx b/website/src/pages/cs/subgraphs/cookbook/secure-api-keys-nextjs.mdx
index de502a0ed526..d311cfa5117e 100644
--- a/website/src/pages/cs/subgraphs/cookbook/secure-api-keys-nextjs.mdx
+++ b/website/src/pages/cs/subgraphs/cookbook/secure-api-keys-nextjs.mdx
@@ -4,9 +4,9 @@ title: Jak zabezpečit klíče API pomocí komponent serveru Next.js
## Přehled
-K řádnému zabezpečení našeho klíče API před odhalením ve frontendu naší aplikace můžeme použít [komponenty serveru Next.js](https://nextjs.org/docs/app/building-your-application/rendering/server-components). Pro další zvýšení zabezpečení našeho klíče API můžeme také [omezit náš klíč API na určité podgrafy nebo domény v Podgraf Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key).
+We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key).
-V této kuchařce probereme, jak vytvořit serverovou komponentu Next.js, která se dotazuje na podgraf a zároveň skrývá klíč API před frontend.
+In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend.
### Upozornění
@@ -18,7 +18,7 @@ V této kuchařce probereme, jak vytvořit serverovou komponentu Next.js, která
Ve standardní aplikaci React mohou být klíče API obsažené v kódu frontendu vystaveny na straně klienta, což představuje bezpečnostní riziko. Soubory `.env` se sice běžně používají, ale plně klíče nechrání, protože kód Reactu se spouští na straně klienta a vystavuje klíč API v hlavičkách. Serverové komponenty Next.js tento problém řeší tím, že citlivé operace zpracovávají na straně serveru.
-### Použití vykreslování na straně klienta k dotazování podgrafu
+### Using client-side rendering to query a Subgraph

diff --git a/website/src/pages/cs/subgraphs/cookbook/subgraph-composition-three-sources.mdx b/website/src/pages/cs/subgraphs/cookbook/subgraph-composition-three-sources.mdx
new file mode 100644
index 000000000000..0b4847244981
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/cookbook/subgraph-composition-three-sources.mdx
@@ -0,0 +1,98 @@
+---
+title: Aggregate Data Using Subgraph Composition
+sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs
+---
+
+Optimize your Subgraph by merging data from three independent source Subgraphs into a single composable Subgraph to enhance data aggregation.
+
+> Important Reminders:
+>
+> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/).
+> - This feature requires `specVersion` 1.3.0.
+
+## Přehled
+
+Subgraph composition empowers you to use one Subgraph as a data source for another, allowing it to consume and respond to entity changes. Instead of fetching onchain data directly, a Subgraph can listen for updates from another Subgraph and react to changes. This is useful for aggregating data from multiple Subgraphs or triggering actions based on external updates.
+
+## Prerequisites
+
+To deploy **all** Subgraphs locally, you must have the following:
+
+- A [Graph Node](https://github.com/graphprotocol/graph-node) instance running locally
+- An [IPFS](https://docs.ipfs.tech/) instance running locally
+- [Node.js](https://nodejs.org) and npm
+
+## Začněte
+
+The following guide provides examples for defining three source Subgraphs to create one powerful composed Subgraph.
+
+### Specifics
+
+- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality.
+- Each source Subgraph is optimized with a specific entity.
+- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance.
+
+### Step 1. Deploy Block Time Source Subgraph
+
+This first source Subgraph calculates the block time for each block.
+
+- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined.
+- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the following commands:
+
+```bash
+npm install
+npm run codegen
+npm run build
+npm run create-local
+npm run deploy-local
+```
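For orientation, the schema addition described above might look roughly like this. This is a sketch only — the field names are illustrative, and the actual code lives in the example repo linked under "Další zdroje" below:

```graphql
type Block @entity {
  id: ID!
  number: BigInt!
  timestamp: BigInt!
  blockTime: BigInt # seconds elapsed since the previous block was mined
}
```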
+
+### Step 2. Deploy Block Cost Source Subgraph
+
+This second source Subgraph indexes the cost of each block.
+
+#### Key Functions
+
+- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields.
+- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the same commands as above.
+
+### Step 3. Define Block Size in Source Subgraph
+
+This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above.
+
+#### Key Functions
+
+- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size.
+- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly.
+
+### Step 4. Combine Into Block Stats Subgraph
+
+This composed Subgraph combines and aggregates the information from the three source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above.
+
+> Note:
+>
+> - Any change to a source Subgraph will likely generate a new deployment ID.
+> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes.
+> - All source Subgraphs should be deployed before the composed Subgraph is deployed.
+
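Concretely, the composed Subgraph's manifest lists each source Subgraph as a `kind: subgraph` data source. The following is a hedged sketch — the network, start block, and deployment ID are placeholders; see the example repo for the real manifest:

```yaml
specVersion: 1.3.0
schema:
  file: ./schema.graphql
dataSources:
  - kind: subgraph
    name: BlockTime
    network: mainnet # must match the network of the source Subgraphs
    source:
      address: 'Qm...' # deployment ID of the block-time source Subgraph
      startBlock: 1
  # ...plus one `kind: subgraph` entry each for the block-cost and block-size Subgraphs
```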
+#### Key Functions
+
+- It provides a consolidated data model that encompasses all relevant block metrics.
+- It combines data from three source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses.
+
+## Key Takeaways
+
+- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs.
+- The setup includes the deployment of three source Subgraphs and one final deployment of the composed Subgraph.
+- This feature unlocks scalability, simplifying both development and maintenance efficiency.
+
+## Další zdroje
+
+- Check out all the code for this example in [this GitHub repo](https://github.com/isum/subgraph-composition-example).
+- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/).
+- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
diff --git a/website/src/pages/cs/subgraphs/cookbook/subgraph-composition.mdx b/website/src/pages/cs/subgraphs/cookbook/subgraph-composition.mdx
new file mode 100644
index 000000000000..f2b7abeae26a
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/cookbook/subgraph-composition.mdx
@@ -0,0 +1,139 @@
+---
+title: Enhance Your Subgraph Build Using Subgraph Composition with Sushiswap v3 on Base
+sidebarTitle: Using Subgraph Composition with Sushiswap v3 on Base
+---
+
+Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it.
+
+> Important Reminders:
+>
+> - Subgraph composition is built into the CLI, and you can deploy with [Subgraph Studio](https://thegraph.com/studio/).
+> - You can use existing Subgraphs, but they must be redeployed with `specVersion` 1.3.0, which doesn't require you to write new code.
+> - You may want to restructure your Subgraph to split out the logic as you move to a composable Subgraph world.
+
+## Úvod
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Speed up your Subgraph's syncing speed
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+
+- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts.
+- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers.
+
+While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
+
+### Source Subgraph
+
+The source Subgraph tracks events from the Sushiswap v3 Subgraph on the Base chain. This Subgraph's configuration file is `source/subgraph.yaml`.
+
+> The `source/subgraph.yaml` employs the advanced Subgraph feature, [declarative `eth_calls`](https://thegraph.com/docs/en/subgraphs/developing/creating/advanced/#declared-eth_call). To review the code for this `source/subgraph.yaml`, check out the [source Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/a5f13cb4b961f92d5c5631dca589c54feb1c0a19/source/subgraph.yaml).
+
+### Dependent Subgraph
+
+The dependent Subgraph is in the `dependent/subgraph.yaml` file, which specifies the source Subgraph as a data source. This Subgraph uses entities from the source to trigger specific actions based on changes to those entities.
+
+> To review the code for this `dependent/subgraph.yaml`, check out the [dependent Subgraph example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph/blob/main/dependant/subgraph.yaml).
+
+## Začněte
+
+The following guide illustrates how to use one Subgraph as a data source for another. This example uses:
+
+- Sushiswap v3 Subgraph on Base chain
+- Two Subgraphs (but you can use up to **5 source** Subgraphs in your own development).
+
+### Step 1. Set Up Your Source Subgraph
+
+To set the source Subgraph as a data source in the dependent Subgraph, include the following in `subgraph.yaml`:
+
+```yaml
+specVersion: 1.3.0
+schema:
+ file: ./schema.graphql
+dataSources:
+ - kind: subgraph
+ name: Factory
+ network: base
+ source:
+ address: 'QmdXu8byAFCGSDWsB5gMQjWr6GUvEVB7S1hemfxNuomerz'
+ startBlock: 82522
+```
+
+Here, `source.address` refers to the Deployment ID of the source Subgraph, and `startBlock` specifies the block from which indexing should begin.
+
+### Step 2. Define Handlers in Dependent Subgraph
+
+Below is an example of defining handlers in the dependent Subgraph:
+
+```typescript
+export function handleInitialize(trigger: EntityTrigger<Initialize>): void {
+ if (trigger.operation === EntityOp.Create) {
+ let entity = trigger.data
+ let poolAddressParam = Address.fromBytes(entity.poolAddress)
+
+ // Update pool sqrt price and tick
+ let pool = Pool.load(poolAddressParam.toHexString()) as Pool
+ pool.sqrtPrice = entity.sqrtPriceX96
+ pool.tick = BigInt.fromI32(entity.tick)
+ pool.save()
+
+ // Update token prices
+ let token0 = Token.load(pool.token0) as Token
+ let token1 = Token.load(pool.token1) as Token
+
+ // Update ETH price in USD
+ let bundle = Bundle.load('1') as Bundle
+ bundle.ethPriceUSD = getEthPriceInUSD()
+ bundle.save()
+
+ updatePoolDayData(entity)
+ updatePoolHourData(entity)
+
+ // Update derived ETH price for tokens
+ token0.derivedETH = findEthPerToken(token0)
+ token1.derivedETH = findEthPerToken(token1)
+ token0.save()
+ token1.save()
+ }
+}
+```
+
+In this example, the `handleInitialize` function is triggered when a new `Initialize` entity is created in the source Subgraph, passed as `EntityTrigger`. The handler updates the pool and token entities based on data from the new `Initialize` entity.
+
+`EntityTrigger` has three fields:
+
+1. `operation`: Specifies the operation type, which can be `Create`, `Modify`, or `Remove`.
+2. `type`: Indicates the entity type.
+3. `data`: Contains the entity data.
+
+Developers can then determine specific actions for the entity data based on the operation type.
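As a self-contained sketch (not the generated graph-ts bindings) of that dispatch pattern, the `EntityOp` and `EntityTrigger` shapes below mirror the three fields described above; all names here are illustrative:

```typescript
// Illustrative stand-ins for the trigger types described above.
enum EntityOp {
  Create,
  Modify,
  Remove,
}

interface EntityTrigger<T> {
  operation: EntityOp
  type: string
  data: T
}

// Route each operation type to a different action.
function describeTrigger<T>(trigger: EntityTrigger<T>): string {
  switch (trigger.operation) {
    case EntityOp.Create:
      return `handle creation of ${trigger.type}`
    case EntityOp.Modify:
      return `handle update of ${trigger.type}`
    case EntityOp.Remove:
      return `handle removal of ${trigger.type}`
    default:
      return `ignore ${trigger.type}`
  }
}
```

In a real mapping, each branch would load, mutate, and save entities (as in the `handleInitialize` example above) rather than return a string.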
+
+## Key Takeaways
+
+- Use this powerful tool to quickly scale your Subgraph development and reuse existing data.
+- The setup includes creating a base source Subgraph and referencing it in a dependent Subgraph.
+- You define handlers in the dependent Subgraph to perform actions based on changes in the source Subgraph's entities.
+
+This approach unlocks composability and scalability, simplifying both development and maintenance efficiency.
+
+## Další zdroje
+
+To use other advanced features in your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/) and [this Subgraph composition example repo](https://github.com/incrypto32/subgraph-composition-sample-subgraph).
+
+To learn how to define three source Subgraphs, check out [this Subgraph composition example repo](https://github.com/isum/subgraph-composition-example).
diff --git a/website/src/pages/cs/subgraphs/cookbook/subgraph-debug-forking.mdx b/website/src/pages/cs/subgraphs/cookbook/subgraph-debug-forking.mdx
index 4673b362c360..60ad21d2fe95 100644
--- a/website/src/pages/cs/subgraphs/cookbook/subgraph-debug-forking.mdx
+++ b/website/src/pages/cs/subgraphs/cookbook/subgraph-debug-forking.mdx
@@ -2,23 +2,23 @@
title: Rychlé a snadné ladění podgrafů pomocí vidliček
---
-As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up subgraph debugging!
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging!
## Ok, co to je?
-**Subgraph forking** is the process of lazily fetching entities from _another_ subgraph's store (usually a remote one).
+**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one).
-In the context of debugging, **subgraph forking** allows you to debug your failed subgraph at block _X_ without needing to wait to sync-up to block _X_.
+In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_.
## Co?! Jak?
-When you deploy a subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
+When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
-In a nutshell, we are going to _fork the failing subgraph_ from a remote Graph Node that is guaranteed to have the subgraph indexed up to block _X_ in order to provide the locally deployed subgraph being debugged at block _X_ an up-to-date view of the indexing state.
+In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state.
## Ukažte mi prosím nějaký kód!
-To stay focused on subgraph debugging, let's keep things simple and run along with the [example-subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.
+To stay focused on Subgraph debugging, let's keep things simple and run along with the [example subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.
Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever:
@@ -44,22 +44,22 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void {
}
```
-Oops, how unfortunate, when I deploy my perfect looking subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error.
+Oops, how unfortunate: when I deploy my perfect-looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/) it fails with the _"Gravatar not found!"_ error.
Obvyklý způsob, jak se pokusit o opravu, je:
1. Proveďte změnu ve zdroji mapování, která podle vás problém vyřeší (zatímco já vím, že ne).
-2. Re-deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
+2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
3. Počkejte na synchronizaci.
4. Pokud se opět rozbije, vraťte se na 1, jinak: Hurá!
It is indeed pretty familiar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._
-Using **subgraph forking** we can essentially eliminate this step. Here is how it looks:
+Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks:
0. Spin-up a local Graph Node with the **_appropriate fork-base_** set.
1. Proveďte změnu ve zdroji mapování, která podle vás problém vyřeší.
-2. Deploy to the local Graph Node, **_forking the failing subgraph_** and **_starting from the problematic block_**.
+2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**.
3. Pokud se opět rozbije, vraťte se na 1, jinak: Hurá!
Nyní můžete mít 2 otázky:
@@ -69,18 +69,18 @@ Nyní můžete mít 2 otázky:
A já odpovídám:
-1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`/`) is a valid GraphQL endpoint for the subgraph's store.
+1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the Subgraph's store.
2. Vidličkování je snadné, není třeba se potit:
```bash
$ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020
```
-Also, don't forget to set the `dataSources.source.startBlock` field in the subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
+Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
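For example, the relevant fragment of `subgraph.yaml` in this walkthrough ends up looking roughly like this (the contract address is the example subgraph's Gravity contract; treat the exact values as illustrative):

```yaml
dataSources:
  - kind: ethereum/contract
    name: Gravity
    source:
      address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC' # Gravity contract
      abi: Gravity
      startBlock: 6190343 # the problematic block from this walkthrough
```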
Takže to dělám takhle:
-1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
+1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
```
$ cargo run -p graph-node --release -- \
@@ -91,11 +91,11 @@ $ cargo run -p graph-node --release -- \
```
2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex.
-3. After I made the changes I deploy my subgraph to the local Graph Node, **_forking the failing subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`:
+3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`:
```bash
$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020
```
4. Zkontroluji protokoly vytvořené místním graf uzlem a hurá, zdá se, že vše funguje.
-5. Nasadím svůj nyní již bezchybný podgraf do vzdáleného uzlu Graf a žiji šťastně až do smrti! (bez brambor)
+5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho)
diff --git a/website/src/pages/cs/subgraphs/cookbook/subgraph-uncrashable.mdx b/website/src/pages/cs/subgraphs/cookbook/subgraph-uncrashable.mdx
index 53750dd1cbee..bdc3671399e1 100644
--- a/website/src/pages/cs/subgraphs/cookbook/subgraph-uncrashable.mdx
+++ b/website/src/pages/cs/subgraphs/cookbook/subgraph-uncrashable.mdx
@@ -2,23 +2,23 @@
title: Generátor kódu bezpečného podgrafu
---
-[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your subgraph are completely safe and consistent.
+[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent.
## Proč se integrovat s aplikací Subgraph Uncrashable?
-- **Continuous Uptime**. Mishandled entities may cause subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your subgraphs “uncrashable” and ensure business continuity.
+- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity.
-- **Completely Safe**. Common problems seen in subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic.
+- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic.
-- **User Configurable** Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy.
+- **User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
**Key Features**
-- The code generation tool accommodates **all** subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification.
+- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that conform to the user's specification.
- Framework také obsahuje způsob (prostřednictvím konfiguračního souboru), jak vytvořit vlastní, ale bezpečné funkce setteru pro skupiny proměnných entit. Tímto způsobem není možné, aby uživatel načetl/použil zastaralou entitu grafu, a také není možné zapomenout uložit nebo nastavit proměnnou, kterou funkce vyžaduje.
-- Warning logs are recorded as logs indicating where there is a breach of subgraph logic to help patch the issue to ensure data accuracy.
+- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
Podgraf Uncrashable lze spustit jako volitelný příznak pomocí příkazu Graph CLI codegen.
@@ -26,4 +26,4 @@ Podgraf Uncrashable lze spustit jako volitelný příznak pomocí příkazu Grap
graph codegen -u [options] []
```
-Visit the [subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer subgraphs.
+Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs.
diff --git a/website/src/pages/cs/subgraphs/cookbook/transfer-to-the-graph.mdx b/website/src/pages/cs/subgraphs/cookbook/transfer-to-the-graph.mdx
index 3e4f8eee8ccf..510b0ea317f6 100644
--- a/website/src/pages/cs/subgraphs/cookbook/transfer-to-the-graph.mdx
+++ b/website/src/pages/cs/subgraphs/cookbook/transfer-to-the-graph.mdx
@@ -1,14 +1,14 @@
---
-title: Tranfer to The Graph
+title: Transfer to The Graph
---
-Quickly upgrade your subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/).
+Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/).
## Benefits of Switching to The Graph
-- Use the same subgraph that your apps already use with zero-downtime migration.
+- Use the same Subgraph that your apps already use with zero-downtime migration.
- Increase reliability from a global network supported by 100+ Indexers.
-- Receive lightning-fast support for subgraphs 24/7, with an on-call engineering team.
+- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team.
## Upgrade Your Subgraph to The Graph in 3 Easy Steps
@@ -21,9 +21,9 @@ Quickly upgrade your subgraphs from any platform to [The Graph's decentralized n
### Create a Subgraph in Subgraph Studio
- Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet.
-- Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name".
+- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name".
-> Note: After publishing, the subgraph name will be editable but requires onchain action each time, so name it properly.
+> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly.
### Install the Graph CLI
@@ -37,7 +37,7 @@ Použitím [npm](https://www.npmjs.com/):
npm install -g @graphprotocol/graph-cli@latest
```
-Use the following command to create a subgraph in Studio using the CLI:
+Use the following command to create a Subgraph in Studio using the CLI:
```sh
graph init --product subgraph-studio
@@ -53,7 +53,7 @@ graph auth
## 2. Deploy Your Subgraph to Studio
-If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your subgraph.
+If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph.
In The Graph CLI, run the following command:
@@ -62,7 +62,7 @@ graph deploy --ipfs-hash
```
-> **Note:** Every subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1).
+> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy, simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1).
## 3. Publish Your Subgraph to The Graph Network
@@ -70,17 +70,17 @@ graph deploy --ipfs-hash
### Query Your Subgraph
-> To attract about 3 indexers to query your subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.
+> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.
-You can start [querying](/subgraphs/querying/introduction/) any subgraph by sending a GraphQL query into the subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.
+You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.
#### Příklad
-[CryptoPunks Ethereum subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:
+[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:

-The query URL for this subgraph is:
+The query URL for this Subgraph is:
```sh
https://gateway-arbitrum.network.thegraph.com/api/`**your-own-api-key**`/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK
@@ -96,9 +96,9 @@ You can create API Keys in Subgraph Studio under the “API Keys” menu at the
### Monitor Subgraph Status
-Once you upgrade, you can access and manage your subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
+Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
### Další zdroje
-- To quickly create and publish a new subgraph, check out the [Quick Start](/subgraphs/quick-start/).
-- To explore all the ways you can optimize and customize your subgraph for a better performance, read more about [creating a subgraph here](/developing/creating-a-subgraph/).
+- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
+- To explore all the ways you can optimize and customize your Subgraph for better performance, read more about [creating a Subgraph](/developing/creating-a-subgraph/).
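As a concrete illustration, a request sent to such a query URL is plain GraphQL. The entity and field names below are illustrative assumptions for sketch purposes, not the actual schema of the Messari Subgraph:

```graphql
# Hypothetical query: `tokens` and its fields are assumed names,
# not taken from the real CryptoPunks Subgraph schema.
{
  tokens(first: 5, orderBy: id) {
    id
    owner {
      id
    }
  }
}
```

The query is sent as the body of a POST request to the query URL, with your API key substituted into the path.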
diff --git a/website/src/pages/cs/subgraphs/developing/creating/advanced.mdx b/website/src/pages/cs/subgraphs/developing/creating/advanced.mdx
index 4fbf2b573c14..0ae33c1efe69 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/advanced.mdx
+++ b/website/src/pages/cs/subgraphs/developing/creating/advanced.mdx
@@ -4,9 +4,9 @@ title: Advanced Subgraph Features
## Přehled
-Add and implement advanced subgraph features to enhanced your subgraph's built.
+Add and implement advanced Subgraph features to enhance your Subgraph's build.
-Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below:
+Starting from `specVersion` `0.0.4`, Subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below:
| Feature | Name |
| ---------------------------------------------------- | ---------------- |
@@ -14,10 +14,10 @@ Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declar
| [Full-text Search](#defining-fulltext-search-fields) | `fullTextSearch` |
| [Grafting](#grafting-onto-existing-subgraphs) | `grafting` |
-For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be:
+For instance, if a Subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be:
```yaml
-specVersion: 0.0.4
+specVersion: 1.3.0
description: Gravatar for Ethereum
features:
- fullTextSearch
@@ -25,7 +25,7 @@ features:
dataSources: ...
```
-> Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used.
+> Note that using a feature without declaring it will incur a **validation error** during Subgraph deployment, but no errors will occur if a feature is declared but not used.
## Timeseries and Aggregations
@@ -33,9 +33,9 @@ Prerequisites:
- Subgraph specVersion must be ≥1.1.0.
-Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, and more.
+Timeseries and aggregations enable your Subgraph to track statistics like daily average price, hourly total transfers, and more.
-This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL.
+This feature introduces two new types of Subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL.
### Example Schema
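A minimal sketch of what such a schema can look like, assuming a `Data` timeseries feeding a `Stats` aggregation (the names and fields here are illustrative):

```graphql
# Timeseries entity: immutable data points, each with a required
# `id` and `timestamp` field populated automatically at save time.
type Data @entity(timeseries: true) {
  id: Int8!
  timestamp: Timestamp!
  price: BigDecimal!
}

# Aggregation entity: pre-declared hourly/daily calculations over `Data`,
# stored for easy access via GraphQL.
type Stats @aggregation(intervals: ["hour", "day"], source: "Data") {
  id: Int8!
  timestamp: Timestamp!
  sum: BigDecimal! @aggregate(fn: "sum", arg: "price")
}
```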
@@ -97,21 +97,21 @@ Aggregation entities are automatically calculated on the basis of the specified
## Nefatální
-Chyby indexování v již synchronizovaných podgrafech ve výchozím nastavení způsobí selhání podgrafy a zastavení synchronizace. Podgrafy lze alternativně nakonfigurovat tak, aby pokračovaly v synchronizaci i při přítomnosti chyb, a to ignorováním změn provedených obslužnou rutinou, která chybu vyvolala. To dává autorům podgrafů čas na opravu jejich podgrafů, zatímco dotazy jsou nadále obsluhovány proti poslednímu bloku, ačkoli výsledky mohou být nekonzistentní kvůli chybě, která chybu způsobila. Všimněte si, že některé chyby jsou stále fatální. Aby chyba nebyla fatální, musí být známo, že je deterministická.
+Indexing errors on already synced Subgraphs will, by default, cause the Subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives Subgraph authors time to correct their Subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic.
-> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio.
+> **Note:** The Graph Network does not yet support non-fatal errors, and developers should not deploy Subgraphs using that functionality to the network via the Studio.
-Povolení nefatálních chyb vyžaduje nastavení následujícího příznaku funkce v manifestu podgraf:
+Enabling non-fatal errors requires setting the following feature flag on the Subgraph manifest:
```yaml
-specVersion: 0.0.4
+specVersion: 1.3.0
description: Gravatar for Ethereum
features:
- nonFatalErrors
...
```
-The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the subgraph has skipped over errors, as in the example:
+The query must also opt-in to querying data with potential inconsistencies through the `subgraphError` argument. It is also recommended to query `_meta` to check if the Subgraph has skipped over errors, as in the example:
```graphql
foos(first: 100, subgraphError: allow) {
@@ -123,7 +123,7 @@ _meta {
}
```
-If the subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response:
+If the Subgraph encounters an error, that query will return both the data and a graphql error with the message `"indexing_error"`, as in this example response:
```graphql
"data": {
@@ -145,7 +145,7 @@ If the subgraph encounters an error, that query will return both the data and a
## IPFS/Arweave File Data Sources
-Zdroje dat souborů jsou novou funkcí podgrafu pro přístup k datům mimo řetězec během indexování robustním a rozšiřitelným způsobem. Zdroje souborových dat podporují načítání souborů ze systému IPFS a z Arweave.
+File data sources are a new Subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave.
> To také vytváří základ pro deterministické indexování dat mimo řetězec a potenciální zavedení libovolných dat ze zdrojů HTTP.
@@ -221,7 +221,7 @@ templates:
- name: TokenMetadata
kind: file/ipfs
mapping:
- apiVersion: 0.0.7
+ apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/mapping.ts
handler: handleMetadata
@@ -246,7 +246,7 @@ The CID of the file as a readable string can be accessed via the `dataSource` as
const cid = dataSource.stringParam()
```
-Příklad
+Příklad
```typescript
import { json, Bytes, dataSource } from '@graphprotocol/graph-ts'
@@ -290,7 +290,7 @@ Příklad:
import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates'
const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm'
-//This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs.
+//This example code is for a Crypto coven Subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs.
export function handleTransfer(event: TransferEvent): void {
let token = Token.load(event.params.tokenId.toString())
@@ -317,23 +317,23 @@ Tím se vytvoří nový zdroj dat souborů, který bude dotazovat nakonfigurovan
This example is using the CID as the lookup between the parent `Token` entity and the resulting `TokenMetadata` entity.
-> Previously, this is the point at which a subgraph developer would have called `ipfs.cat(CID)` to fetch the file
+> Previously, this is the point at which a Subgraph developer would have called `ipfs.cat(CID)` to fetch the file
Gratulujeme, používáte souborové zdroje dat!
-#### Nasazení podgrafů
+#### Deploying your Subgraphs
-You can now `build` and `deploy` your subgraph to any Graph Node >=v0.30.0-rc.0.
+You can now `build` and `deploy` your Subgraph to any Graph Node >=v0.30.0-rc.0.
#### Omezení
-Zpracovatelé a entity zdrojů dat souborů jsou izolovány od ostatních entit podgrafů, což zajišťuje, že jsou při provádění deterministické a nedochází ke kontaminaci zdrojů dat založených na řetězci. Přesněji řečeno:
+File data source handlers and entities are isolated from other Subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific:
- Entity vytvořené souborovými zdroji dat jsou neměnné a nelze je aktualizovat
- Obsluhy zdrojů dat souborů nemohou přistupovat k entita z jiných zdrojů dat souborů
- K entita přidruženým k datovým zdrojům souborů nelze přistupovat pomocí zpracovatelů založených na řetězci
-> Ačkoli by toto omezení nemělo být pro většinu případů použití problematické, pro některé může představovat složitost. Pokud máte problémy s modelováním dat založených na souborech v podgrafu, kontaktujte nás prosím prostřednictvím služby Discord!
+> While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a Subgraph!
Kromě toho není možné vytvářet zdroje dat ze zdroje dat souborů, ať už se jedná o zdroj dat v řetězci nebo jiný zdroj dat souborů. Toto omezení může být v budoucnu zrušeno.
@@ -365,15 +365,15 @@ Handlers for File Data Sources cannot be in files which import `eth_call` contra
> **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0`
-Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments.
+Topic filters, also known as indexed argument filters, are a powerful feature in Subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments.
-- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data.
+- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing Subgraphs to operate more efficiently by focusing only on relevant data.
-- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain.
+- This is useful for creating personal Subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain.
### How Topic Filters Work
-When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments.
+When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a Subgraph's manifest. This allows the Subgraph to listen selectively for events that match these indexed arguments.
- The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event.
@@ -401,7 +401,7 @@ In this example:
#### Configuration in Subgraphs
-Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured:
+Topic filters are defined directly within the event handler configuration in the Subgraph manifest. Here is how they are configured:
```yaml
eventHandlers:
@@ -436,7 +436,7 @@ In this configuration:
- `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender.
- `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver.
-- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`.
+- The Subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`.
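Concretely, the configuration for this example could look like the following sketch (the event signature and handler name are assumptions for illustration):

```yaml
eventHandlers:
  - event: Transfer(indexed address,indexed address,uint256)
    handler: handleDirectedTransfer
    topic1: ['0xAddressA'] # filter on the sender (first indexed argument)
    topic2: ['0xAddressB'] # filter on the receiver (second indexed argument)
```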
#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses
@@ -452,17 +452,17 @@ In this configuration:
- `topic1` is configured to filter `Transfer` events where `0xAddressA`, `0xAddressB`, `0xAddressC` is the sender.
- `topic2` is configured to filter `Transfer` events where `0xAddressB` and `0xAddressC` is the receiver.
-- The subgraph will index transactions that occur in either direction between multiple addresses allowing for comprehensive monitoring of interactions involving all addresses.
+- The Subgraph will index transactions that occur in either direction between multiple addresses, allowing for comprehensive monitoring of interactions involving all addresses.
## Declared eth_call
> Note: This is an experimental feature that is not currently available in a stable Graph Node release yet. You can only use it in Subgraph Studio or your self-hosted node.
-Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.
+Declarative `eth_calls` are a valuable Subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.
This feature does the following:
-- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency.
+- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the Subgraph's overall efficiency.
- Allows faster data fetching, resulting in quicker query responses and a better user experience.
- Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient.
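In the manifest, such calls are declared under a `calls` key on the event handler, using the `Contract[address].function(arguments)` form. A minimal sketch, with the event and handler names assumed for illustration:

```yaml
eventHandlers:
  - event: Swap(indexed address,indexed address,int256,int256,uint160,uint128,int24)
    handler: handleSwap
    calls:
      # Declared ahead of time so graph-node can execute the eth_call
      # in parallel rather than sequentially inside the handler.
      global0X128: Pool[event.address].feeGrowthGlobal0X128()
```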
@@ -474,7 +474,7 @@ This feature does the following:
#### Scenario without Declarative `eth_calls`
-Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings.
+Imagine you have a Subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings.
Traditionally, these calls might be made sequentially:
@@ -498,15 +498,15 @@ Total time taken = max (3, 2, 4) = 4 seconds
#### How it Works
-1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel.
+1. Declarative Definition: In the Subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel.
2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously.
-3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing.
+3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the Subgraph for further processing.
#### Example Configuration in Subgraph Manifest
Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`.
-`Subgraph.yaml` using `event.address`:
+`subgraph.yaml` using `event.address`:
```yaml
eventHandlers:
@@ -524,7 +524,7 @@ Details for the example above:
- The text (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, which is in the form of `Contract[address].function(arguments)`
- The `address` and `arguments` can be replaced with variables that will be available when the handler is executed.
-`Subgraph.yaml` using `event.params`
+`subgraph.yaml` using `event.params`
```yaml
calls:
@@ -535,22 +535,22 @@ calls:
> **Note:** it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more [here](/subgraphs/cookbook/grafting/#important-note-on-grafting-when-upgrading-to-the-network).
-When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source) In some circumstances; it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed.
+When a Subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the `startBlock` defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing Subgraph and start indexing at a much later block. This mode of indexing is called _Grafting_. Grafting is, for example, useful during development to get past simple errors in the mappings quickly, or to temporarily get an existing Subgraph working again after it has failed.
-A subgraph is grafted onto a base subgraph when the subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level:
+A Subgraph is grafted onto a base Subgraph when the Subgraph manifest in `subgraph.yaml` contains a `graft` block at the top-level:
```yaml
description: ...
graft:
- base: Qm... # Subgraph ID of base subgraph
+ base: Qm... # Subgraph ID of base Subgraph
block: 7345624 # Block number
```
-When a subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` subgraph up to and including the given `block` and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph.
+When a Subgraph whose manifest contains a `graft` block is deployed, Graph Node will copy the data of the `base` Subgraph up to and including the given `block` and then continue indexing the new Subgraph from that block on. The base Subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted Subgraph.
-Protože se při roubování základní data spíše kopírují než indexují, je mnohem rychlejší dostat podgraf do požadovaného bloku než při indexování od nuly, i když počáteční kopírování dat může u velmi velkých podgrafů trvat i několik hodin. Během inicializace roubovaného podgrafu bude uzel Graf Uzel zaznamenávat informace o typů entit, které již byly zkopírovány.
+Because grafting copies rather than indexes base data, it is much quicker to get the Subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large Subgraphs. While the grafted Subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied.
-Štěpovaný podgraf může používat schéma GraphQL, které není totožné se schématem základního podgrafu, ale je s ním pouze kompatibilní. Musí to být platné schéma podgrafu jako takové, ale může se od schématu základního podgrafu odchýlit následujícími způsoby:
+The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways:
- Přidává nebo odebírá typy entit
- Odstraňuje atributy z typů entit
@@ -560,4 +560,4 @@ Protože se při roubování základní data spíše kopírují než indexují,
- Přidává nebo odebírá rozhraní
- Mění se, pro které typy entit je rozhraní implementováno
-> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the subgraph manifest.
+> **[Feature Management](#experimental-features):** `grafting` must be declared under `features` in the Subgraph manifest.
diff --git a/website/src/pages/cs/subgraphs/developing/creating/assemblyscript-mappings.mdx b/website/src/pages/cs/subgraphs/developing/creating/assemblyscript-mappings.mdx
index fad0d6ebaa1a..00fb7cbcf275 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/assemblyscript-mappings.mdx
+++ b/website/src/pages/cs/subgraphs/developing/creating/assemblyscript-mappings.mdx
@@ -10,7 +10,7 @@ The mappings take data from a particular source and transform it into entities t
For each event handler that is defined in `subgraph.yaml` under `mapping.eventHandlers`, create an exported function of the same name. Each handler must accept a single parameter called `event` with a type corresponding to the name of the event which is being handled.
-In the example subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events:
+In the example Subgraph, `src/mapping.ts` contains handlers for the `NewGravatar` and `UpdatedGravatar` events:
```javascript
import { NewGravatar, UpdatedGravatar } from '../generated/Gravity/Gravity'
@@ -72,7 +72,7 @@ Pokud není pro pole v nové entitě se stejným ID nastavena žádná hodnota,
## Generování kódu
-Aby byla práce s inteligentními smlouvami, událostmi a entitami snadná a typově bezpečná, může Graf CLI generovat typy AssemblyScript ze schématu GraphQL podgrafu a ABI smluv obsažených ve zdrojích dat.
+In order to make it easy and type-safe to work with smart contracts, events and entities, the Graph CLI can generate AssemblyScript types from the Subgraph's GraphQL schema and the contract ABIs included in the data sources.
To se provádí pomocí
@@ -80,7 +80,7 @@ To se provádí pomocí
graph codegen [--output-dir ] []
```
-but in most cases, subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same:
+but in most cases, Subgraphs are already preconfigured via `package.json` to allow you to simply run one of the following to achieve the same:
```sh
# Yarn
@@ -90,7 +90,7 @@ yarn codegen
npm run codegen
```
-This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with.
+This will generate an AssemblyScript class for every smart contract in the ABI files mentioned in `subgraph.yaml`, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `//.ts`. In the example Subgraph, this would be `generated/Gravity/Gravity.ts`, allowing mappings to import these types with:
```javascript
import {
@@ -102,12 +102,12 @@ import {
} from '../generated/Gravity/Gravity'
```
-In addition to this, one class is generated for each entity type in the subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with
+In addition to this, one class is generated for each entity type in the Subgraph's GraphQL schema. These classes provide type-safe entity loading, read and write access to entity fields as well as a `save()` method to write entities to store. All entity classes are written to `/schema.ts`, allowing mappings to import them with
```javascript
import { Gravatar } from '../generated/schema'
```
-> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the subgraph.
+> **Note:** The code generation must be performed again after every change to the GraphQL schema or the ABIs included in the manifest. It must also be performed at least once before building or deploying the Subgraph.
-Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find.
+Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your Subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find.
diff --git a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/CHANGELOG.md b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/CHANGELOG.md
index 5d90888ac378..5f964d3cbb78 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/CHANGELOG.md
+++ b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/CHANGELOG.md
@@ -1,5 +1,11 @@
# @graphprotocol/graph-ts
+## 0.38.0
+
+### Minor Changes
+
+- [#1935](https://github.com/graphprotocol/graph-tooling/pull/1935) [`0c36a02`](https://github.com/graphprotocol/graph-tooling/commit/0c36a024e0516bbf883ae62b8312dba3d9945f04) Thanks [@isum](https://github.com/isum)! - feat: add yaml parsing support to mappings
+
## 0.37.0
### Minor Changes
diff --git a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/api.mdx b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/api.mdx
index 3c3dbdc7671f..e794c1caa32c 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/api.mdx
+++ b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/api.mdx
@@ -2,12 +2,12 @@
title: AssemblyScript API
---
-> Note: If you created a subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/).
+> Note: If you created a Subgraph prior to `graph-cli`/`graph-ts` version `0.22.0`, then you're using an older version of AssemblyScript. It is recommended to review the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/).
-Learn what built-in APIs can be used when writing subgraph mappings. There are two kinds of APIs available out of the box:
+Learn what built-in APIs can be used when writing Subgraph mappings. There are two kinds of APIs available out of the box:
- The [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`)
-- Code generated from subgraph files by `graph codegen`
+- Code generated from Subgraph files by `graph codegen`
You can also add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript).
@@ -27,18 +27,18 @@ Knihovna `@graphprotocol/graph-ts` poskytuje následující API:
### Versions
-`apiVersion` v manifestu podgrafu určuje verzi mapovacího API, kterou pro daný podgraf používá uzel Graf.
+The `apiVersion` in the Subgraph manifest specifies the mapping API version which is run by Graph Node for a given Subgraph.
-| Verze | Poznámky vydání |
-| :-: | --- |
-| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
-| 0.0.8 | Přidá ověření existence polí ve schéma při ukládání entity. |
-| 0.0.7 | Přidání tříd `TransactionReceipt` a `Log` do typů Ethereum Přidání pole `receipt` do objektu Ethereum událost |
-| 0.0.6 | Přidáno pole `nonce` do objektu Ethereum Transaction Přidáno `baseFeePerGas` do objektu Ethereum bloku |
+| Version | Release notes |
+| :---: | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
+| 0.0.8 | Adds validation for the existence of fields in the schema when saving an entity. |
+| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types. Added `receipt` field to the Ethereum Event object. |
+| 0.0.6 | Added `nonce` field to the Ethereum Transaction object. Added `baseFeePerGas` to the Ethereum Block object. |
| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/resources/migration-guides/assemblyscript-migration-guide/)) `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
-| 0.0.4 | Přidání pole `functionSignature` do objektu Ethereum SmartContractCall |
-| 0.0.3 | Added `from` field to the Ethereum Call object `ethereum.call.address` renamed to `ethereum.call.to` |
-| 0.0.2 | Přidání pole `input` do objektu Ethereum Transackce |
+| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object. |
+| 0.0.3 | Added `from` field to the Ethereum Call object `ethereum.call.address` renamed to `ethereum.call.to` |
+| 0.0.2 | Added `input` field to the Ethereum Transaction object. |
### Built-in Types
@@ -147,7 +147,7 @@ _Math_
- `x.notEqual(y: BigInt): bool` – can be written as `x != y`.
- `x.lt(y: BigInt): bool` – can be written as `x < y`.
- `x.le(y: BigInt): bool` – can be written as `x <= y`.
-- `x.gt(y: BigInt): bool` – lze zapsat jako `x > y`.
+- `x.gt(y: BigInt): bool` – can be written as `x > y`.
- `x.ge(y: BigInt): bool` – can be written as `x >= y`.
- `x.neg(): BigInt` – can be written as `-x`.
- `x.divDecimal(y: BigDecimal): BigDecimal` – divides by a decimal, giving a decimal result.
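The correspondence between these methods and ordinary operators can be illustrated with native TypeScript `bigint` values (an analogy only; the graph-ts `BigInt` is its own class and is not interchangeable with native `bigint`):

```typescript
// Native bigint mirrors the comparison/negation semantics of the
// graph-ts BigInt helper methods.
const x = 10n
const y = 3n

const gt = x > y   // what x.gt(y) expresses
const le = x <= y  // what x.le(y) expresses
const neg = -x     // what x.neg() expresses
const sum = x + y  // what x.plus(y) expresses
```

In mappings you must call the methods (`x.gt(y)`, `x.neg()`, …) because AssemblyScript's `BigInt` is a class, but the results match the operator forms shown in the comments.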
@@ -223,7 +223,7 @@ import { store } from '@graphprotocol/graph-ts'
The `store` API allows entities to be loaded from, saved to, and removed from the Graph Node store.
-Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities.
+Entities written to the store map one-to-one to the `@entity` types defined in the Subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities.
#### Creating entities
@@ -282,8 +282,8 @@ Od verzí `graph-node` v0.31.0, `@graphprotocol/graph-ts` v0.30.0 a `@graphproto
The store API facilitates the retrieval of entities that were created or updated in the current block. A typical situation for this is that one handler creates a transaction from some onchain event, and a later handler wants to access this transaction if it exists.
-- In the case where the transaction does not exist, the subgraph will have to go to the database simply to find out that the entity does not exist. If the subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip.
-- For some subgraphs, these missed lookups can contribute significantly to the indexing time.
+- In the case where the transaction does not exist, the Subgraph will have to go to the database simply to find out that the entity does not exist. If the Subgraph author already knows that the entity must have been created in the same block, using `loadInBlock` avoids this database roundtrip.
+- For some Subgraphs, these missed lookups can contribute significantly to the indexing time.
```typescript
let id = event.transaction.hash // or however the ID is constructed
@@ -380,11 +380,11 @@ Ethereum API poskytuje přístup k inteligentním smlouvám, veřejným stavový
#### Support for Ethereum Types
-Stejně jako u entit generuje `graph codegen` třídy pro všechny inteligentní smlouvy a události používané v podgrafu. Za tímto účelem musí být ABI kontraktu součástí zdroje dat v manifestu podgrafu. Obvykle jsou soubory ABI uloženy ve složce `abis/`.
+As with entities, `graph codegen` generates classes for all smart contracts and events used in a Subgraph. For this, the contract ABIs need to be part of the data source in the Subgraph manifest. Typically, the ABI files are stored in an `abis/` folder.
-Ve vygenerovaných třídách probíhají konverze mezi typy Ethereum [built-in-types](#built-in-types) v pozadí, takže se o ně autoři podgraf nemusí starat.
+With the generated classes, conversions between Ethereum types and the [built-in types](#built-in-types) take place behind the scenes so that Subgraph authors do not have to worry about them.
-To ilustruje následující příklad. Je dáno schéma podgrafu, jako je
+The following example illustrates this. Given a Subgraph schema like
```graphql
type Transfer @entity {
@@ -483,7 +483,7 @@ class Log {
#### Přístup ke stavu inteligentní smlouvy
-Kód vygenerovaný nástrojem `graph codegen` obsahuje také třídy pro inteligentní smlouvy používané v podgrafu. Ty lze použít k přístupu k veřejným stavovým proměnným a k volání funkcí kontraktu v aktuálním bloku.
+The code generated by `graph codegen` also includes classes for the smart contracts used in the Subgraph. These can be used to access public state variables and call functions of the contract at the current block.
A common pattern is to access the contract from which an event originates. This is achieved with the following code:
@@ -506,7 +506,7 @@ export function handleTransfer(event: TransferEvent) {
If the contract `ERC20Contract` on Ethereum has a public read-only function called `symbol`, it can be called with `.symbol()`. For public state variables, a method with the same name is created automatically.
-Jakákoli jiná smlouva, která je součástí podgrafu, může být importována z vygenerovaného kódu a může být svázána s platnou adresou.
+Any other contract that is part of the Subgraph can be imported from the generated code and can be bound to a valid address.
#### Handling reverted calls
@@ -582,7 +582,7 @@ let isContract = ethereum.hasCode(eoa).inner // returns false
import { log } from '@graphprotocol/graph-ts'
```
-The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument.
+The `log` API allows Subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments.
The `log` API includes the following functions:
@@ -590,7 +590,7 @@ The `log` API allows subgraphs to log information to the Graph Node standard out
- `log.info(fmt: string, args: Array): void` - logs an informational message.
- `log.warning(fmt: string, args: Array): void` - logs a warning.
- `log.error(fmt: string, args: Array): void` - logs an error message.
-- `log.critical(fmt: string, args: Array): void` - zaznamená kritickou zprávu _a_ ukončí podgraf.
+- `log.critical(fmt: string, args: Array): void` – logs a critical message _and_ terminates the Subgraph.
The `log` API takes a format string and an array of string values. It then replaces the placeholders with the string values from the array. The first `{}` placeholder gets replaced by the first value in the array, the second `{}` placeholder gets replaced by the second value, and so on.
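The placeholder substitution can be sketched in plain TypeScript (a hypothetical re-implementation for illustration; the real formatting happens inside Graph Node):

```typescript
// Replace each `{}` placeholder with the next value from `args`, in order.
// Placeholders beyond the end of `args` are left untouched.
function formatLog(fmt: string, args: string[]): string {
  let i = 0
  return fmt.replace(/\{\}/g, () => (i < args.length ? args[i++] : '{}'))
}

const msg = formatLog('Transfer of {} tokens from {}', ['42', '0xabc'])
// msg === 'Transfer of 42 tokens from 0xabc'
```

Note that the arguments are positional: there is no way to reorder or repeat a value, so each `{}` consumes exactly one entry of the array.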
@@ -721,7 +721,7 @@ ipfs.mapJSON('Qm...', 'processItem', Value.fromString('parentId'))
Currently, only the `json` flag is supported, and it must be passed to `ipfs.map`. With the `json` flag, the IPFS file must consist of a series of JSON values, one value per line. The call to `ipfs.map` reads each line of the file, deserializes it into a `JSONValue`, and calls the callback for each of them. The callback can then use entity operations to store data from the `JSONValue`. Entity changes are stored only when the handler that called `ipfs.map` finishes successfully; in the meantime, they are kept in memory, and the size of the file that `ipfs.map` can process is therefore limited.
-Při úspěchu vrátí `ipfs.map` hodnotu `void`. Pokud vyvolání zpětného volání způsobí chybu, obslužná rutina, která vyvolala `ipfs.map`, se přeruší a podgraf se označí jako neúspěšný.
+On success, `ipfs.map` returns `void`. If any invocation of the callback causes an error, the handler that invoked `ipfs.map` is aborted, and the Subgraph is marked as failed.
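The line-by-line processing model of the `json` flag can be sketched as follows (an illustrative model in plain TypeScript, not the actual Graph Node implementation):

```typescript
// Each line of the IPFS file is one JSON value; the callback runs once per line.
function mapJsonLines(
  fileContents: string,
  callback: (value: unknown) => void,
): void {
  for (const line of fileContents.split('\n')) {
    if (line.trim() === '') continue // tolerate a trailing newline
    callback(JSON.parse(line)) // a parse error here would abort the handler
  }
}

const seen: unknown[] = []
mapJsonLines('{"id":1}\n{"id":2}\n', (v) => seen.push(v))
// seen now holds the two parsed objects
```

This also makes the failure mode concrete: one malformed line throws out of the loop, which corresponds to the handler being aborted and the Subgraph being marked as failed.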
### Crypto API
@@ -836,7 +836,7 @@ Základní třída `Entity` a podřízená třída `DataSourceContext` mají pom
### DataSourceContext in Manifest
-Sekce `context` v rámci `dataSources` umožňuje definovat páry klíč-hodnota, které jsou přístupné v rámci mapování podgrafů. Dostupné typy jsou `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List` a `BigInt`.
+The `context` section within `dataSources` allows you to define key-value pairs that are accessible within your Subgraph mappings. The available types are `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`.
Here is a YAML example illustrating the usage of various types in the `context` section:
@@ -887,4 +887,4 @@ dataSources:
- `List`: Specifies a list of items. Each item needs to specify its type and data.
- `BigInt`: Specifies a large integer value. Due to its large size, it must be quoted.
-Tento kontext je pak přístupný v souborech mapování podgrafů, což umožňuje vytvářet dynamičtější a konfigurovatelnější podgrafy.
+This context is then accessible in your Subgraph mapping files, enabling more dynamic and configurable Subgraphs.
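In the mappings, context values are read back through typed getters; the real API is `dataSource.context()` from `graph-ts`. The class below is a simplified stand-in (hypothetical, for illustration only) that models the typed key-value access pattern:

```typescript
// Simplified model of DataSourceContext: a typed key-value bag.
class DataSourceContext {
  private values = new Map<string, string | boolean>()

  setString(key: string, value: string): void {
    this.values.set(key, value)
  }

  setBoolean(key: string, value: boolean): void {
    this.values.set(key, value)
  }

  // Typed getter: fails loudly if the key is missing or holds another type.
  getString(key: string): string {
    const v = this.values.get(key)
    if (typeof v !== 'string') throw new Error(`no string value for key ${key}`)
    return v
  }
}

// In a real mapping, `ctx` would come from dataSource.context().
const ctx = new DataSourceContext()
ctx.setString('factory', '0x0000000000000000000000000000000000000000')
const factory = ctx.getString('factory')
```

The typed getters are what makes the manifest types (`Bool`, `String`, `BigInt`, …) matter: each value is retrieved with the getter matching its declared type.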
diff --git a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/common-issues.mdx b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/common-issues.mdx
index 79ec3df1a827..419f698e68e4 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/graph-ts/common-issues.mdx
+++ b/website/src/pages/cs/subgraphs/developing/creating/graph-ts/common-issues.mdx
@@ -2,7 +2,7 @@
title: Common AssemblyScript Issues
---
-Při vývoji podgrafů se často vyskytují určité problémy [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Jejich obtížnost při ladění je různá, nicméně jejich znalost může pomoci. Následuje neúplný seznam těchto problémů:
+There are certain [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) issues that are common to run into during Subgraph development. They range in debug difficulty, however, being aware of them may help. The following is a non-exhaustive list of these issues:
- `Private` class variables are not enforced in [AssemblyScript](https://www.assemblyscript.org/status.html#language-features). There is no way to protect class variables from being directly changed from the class object.
- Scope is not inherited into [closure functions](https://www.assemblyscript.org/status.html#on-closures), i.e. variables declared outside of closure functions cannot be used. Explanation in [Developer Highlights #3](https://www.youtube.com/watch?v=1-8AW-lVfrA&t=3243s).
diff --git a/website/src/pages/cs/subgraphs/developing/creating/install-the-cli.mdx b/website/src/pages/cs/subgraphs/developing/creating/install-the-cli.mdx
index dbeac0c137a5..536b416c9465 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/install-the-cli.mdx
+++ b/website/src/pages/cs/subgraphs/developing/creating/install-the-cli.mdx
@@ -2,11 +2,11 @@
title: Install the Graph CLI
---
-> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/).
+> In order to use your Subgraph on The Graph's decentralized network, you will need to [create an API key](/resources/subgraph-studio-faq/#2-how-do-i-create-an-api-key) in [Subgraph Studio](https://thegraph.com/studio/apikeys/). It is recommended that you add signal to your Subgraph with at least 3,000 GRT to attract 2-3 Indexers. To learn more about signaling, check out [curating](/resources/roles/curating/).
## Overview
-The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network.
+The [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a command-line interface that facilitates developers' commands for The Graph. It processes a [Subgraph manifest](/subgraphs/developing/creating/subgraph-manifest/) and compiles the [mappings](/subgraphs/developing/creating/assemblyscript-mappings/) to create the files you will need to deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) and the network.
## Getting Started
@@ -28,13 +28,13 @@ npm install -g @graphprotocol/graph-cli@latest
yarn global add @graphprotocol/graph-cli
```
-The `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new subgraph from that contract to get started.
+The `graph init` command can be used to set up a new Subgraph project, either from an existing contract or from an example Subgraph. If you already have a smart contract deployed to your preferred network, you can bootstrap a new Subgraph from that contract to get started.
## Create a Subgraph

### From an Existing Contract
-The following command creates a subgraph that indexes all events of an existing contract:
+The following command creates a Subgraph that indexes all events of an existing contract:
```sh
graph init \
@@ -51,25 +51,25 @@ graph init \
- If any of the optional arguments are missing, it guides you through an interactive form.
-- The `` is the ID of your subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your subgraph details page.
+- The `` is the ID of your Subgraph in [Subgraph Studio](https://thegraph.com/studio/). It can be found on your Subgraph details page.
### From an Example Subgraph
-The following command initializes a new project from an example subgraph:
+The following command initializes a new project from an example Subgraph:
```sh
graph init --from-example=example-subgraph
```
-- The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated.
+- The [example Subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant, which manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated.
-- The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events.
+- The Subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events.
### Add New `dataSources` to an Existing Subgraph
-`dataSources` are key components of subgraphs. They define the sources of data that the subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them.
+`dataSources` are key components of Subgraphs. They define the sources of data that the Subgraph indexes and processes. A `dataSource` specifies which smart contract to listen to, which events to process, and how to handle them.
-Recent versions of the Graph CLI supports adding new `dataSources` to an existing subgraph through the `graph add` command:
+Recent versions of the Graph CLI support adding new `dataSources` to an existing Subgraph through the `graph add` command:
```sh
graph add []
@@ -101,19 +101,5 @@ The `graph add` command will fetch the ABI from Etherscan (unless an ABI path is
The ABI file(s) must match your contract(s). There are a few ways to obtain ABI files:

- If you are building your own project, you will likely have access to your most current ABIs.
-- If you are building a subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile.
-- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your subgraph will fail.
-
-## SpecVersion Releases
-
-| Verze | Poznámky vydání |
-| :-: | --- |
-| 1.2.0 | Added support for [Indexed Argument Filtering](/#indexed-argument-filters--topic-filters) & declared `eth_call` |
-| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
-| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs |
-| 0.0.9 | Supports `endBlock` feature |
-| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
-| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
-| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
-| 0.0.5 | Added support for event handlers having access to transaction receipts. |
-| 0.0.4 | Added support for managing subgraph features. |
+- If you are building a Subgraph for a public project, you can download that project to your computer and get the ABI by using [`npx hardhat compile`](https://hardhat.org/hardhat-runner/docs/guides/compile-contracts#compiling-your-contracts) or using `solc` to compile.
+- You can also find the ABI on [Etherscan](https://etherscan.io/), but this isn't always reliable, as the ABI that is uploaded there may be out of date. Make sure you have the right ABI, otherwise running your Subgraph will fail.
diff --git a/website/src/pages/cs/subgraphs/developing/creating/ql-schema.mdx b/website/src/pages/cs/subgraphs/developing/creating/ql-schema.mdx
index c0a99bb516eb..dcc831244293 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/ql-schema.mdx
+++ b/website/src/pages/cs/subgraphs/developing/creating/ql-schema.mdx
@@ -4,7 +4,7 @@ title: The Graph QL Schema
## Overview
-The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language.
+The schema for your Subgraph is in the file `schema.graphql`. GraphQL schemas are defined using the GraphQL interface definition language.
> Note: If you've never written a GraphQL schema, it is recommended that you check out this primer on the GraphQL type system. Reference documentation for GraphQL schemas can be found in the [GraphQL API](/subgraphs/querying/graphql-api/) section.
@@ -12,7 +12,7 @@ The schema for your subgraph is in the file `schema.graphql`. GraphQL schemas ar
Before defining entities, it is important to take a step back and think about how your data is structured and linked.
-- All queries will be made against the data model defined in the subgraph schema. As a result, the design of the subgraph schema should be informed by the queries that your application will need to perform.
+- All queries will be made against the data model defined in the Subgraph schema. As a result, the design of the Subgraph schema should be informed by the queries that your application will need to perform.
- It may be useful to imagine entities as "objects containing data", rather than as events or functions.
- You define entity types in `schema.graphql`, and Graph Node will generate top-level fields for querying single instances and collections of that entity type.
- Each type that should be an entity is required to be annotated with an `@entity` directive.
@@ -72,16 +72,16 @@ For some entity types the `id` for `Bytes!` is constructed from the id's of two
The following scalars are supported in the GraphQL API:
-| Typ | Popis |
-| --- | --- |
-| `Bytes` | Pole bajtů reprezentované jako hexadecimální řetězec. Běžně se používá pro hashe a adresy Ethereum. |
-| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. |
-| `Boolean` | Scalar for `boolean` values. |
-| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. |
-| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. |
-| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. |
-| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. |
-| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. |
+| Type | Description |
+| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `Bytes` | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. |
+| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. |
+| `Boolean` | Scalar for `boolean` values. |
+| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. |
+| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from Ethereum. |
+| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. |
+| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. |
+| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. |
### Enums
@@ -141,7 +141,7 @@ type TokenBalance @entity {
Reverse lookups can be defined on an entity through the `@derivedFrom` field. This creates a virtual field on the entity that may be queried but cannot be set manually through the mappings API. Rather, it is derived from the relationship defined on the other entity. For such relationships, it rarely makes sense to store both sides of the relationship, and both indexing and query performance will be better when only one side is stored and the other is derived.
-U vztahů typu "jeden k mnoha" by měl být vztah vždy uložen na straně "jeden" a strana "mnoho" by měla být vždy odvozena. Uložení vztahu tímto způsobem namísto uložení pole entit na straně "mnoho" povede k výrazně lepšímu výkonu jak při indexování, tak při dotazování na podgraf. Obecně platí, že ukládání polí entit je třeba se vyhnout, pokud je to praktické.
+For one-to-many relationships, the relationship should always be stored on the 'one' side, and the 'many' side should always be derived. Storing the relationship this way, rather than storing an array of entities on the 'many' side, will result in dramatically better performance for both indexing and querying the Subgraph. In general, storing arrays of entities should be avoided as much as is practical.
#### Example
@@ -160,7 +160,7 @@ type TokenBalance @entity {
}
```
-Here is an example of how to write a mapping for a subgraph with reverse lookups:
+Here is an example of how to write a mapping for a Subgraph with reverse lookups:
```typescript
let token = new Token(event.address) // Create Token
@@ -231,7 +231,7 @@ query usersWithOrganizations {
}
```
-Tento propracovanější způsob ukládání vztahů mnoho-více vede k menšímu množství dat uložených pro podgraf, a tedy k podgrafu, který je často výrazně rychlejší při indexování a dotazování.
+This more elaborate way of storing many-to-many relationships will result in less data stored for the Subgraph, and therefore to a Subgraph that is often dramatically faster to index and to query.
### Adding comments to the schema
@@ -287,7 +287,7 @@ query {
}
```
-> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the subgraph manifest.
+> **[Feature Management](#experimental-features):** From `specVersion` `0.0.4` and onwards, `fullTextSearch` must be declared under the `features` section in the Subgraph manifest.
## Supported Languages
diff --git a/website/src/pages/cs/subgraphs/developing/creating/starting-your-subgraph.mdx b/website/src/pages/cs/subgraphs/developing/creating/starting-your-subgraph.mdx
index 436b407a19ba..04f1eee28246 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/starting-your-subgraph.mdx
+++ b/website/src/pages/cs/subgraphs/developing/creating/starting-your-subgraph.mdx
@@ -4,20 +4,32 @@ title: Starting Your Subgraph
## Overview
-The Graph is home to thousands of subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs.
+The Graph is home to thousands of Subgraphs already available for query, so check [The Graph Explorer](https://thegraph.com/explorer) and find one that already matches your needs.
-When you create a [subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL.
+When you create a [Subgraph](/subgraphs/developing/subgraphs/), you create a custom open API that extracts data from a blockchain, processes it, stores it, and makes it easy to query via GraphQL.
-Subgraph development ranges from simple scaffold subgraphs to advanced, specifically tailored subgraphs.
+Subgraph development ranges from simple scaffold Subgraphs to advanced, specifically tailored Subgraphs.
### Start Building
-Start the process and build a subgraph that matches your needs:
+Start the process and build a Subgraph that matches your needs:
1. [Install the CLI](/subgraphs/developing/creating/install-the-cli/) - Set up your infrastructure
-2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a subgraph's key component
+2. [Subgraph Manifest](/subgraphs/developing/creating/subgraph-manifest/) - Understand a Subgraph's key component
3. [The GraphQL Schema](/subgraphs/developing/creating/ql-schema/) - Write your schema
4. [Writing AssemblyScript Mappings](/subgraphs/developing/creating/assemblyscript-mappings/) - Write your mappings
-5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your subgraph with advanced features
+5. [Advanced Features](/subgraphs/developing/creating/advanced/) - Customize your Subgraph with advanced features
Explore additional [resources for APIs](/subgraphs/developing/creating/graph-ts/README/) and conduct local testing with [Matchstick](/subgraphs/developing/creating/unit-testing-framework/).
+
+## SpecVersion Releases
+
+| Verze | Poznámky vydání |
+| :---: | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune Subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing Subgraph features. |
diff --git a/website/src/pages/cs/subgraphs/developing/creating/subgraph-manifest.mdx b/website/src/pages/cs/subgraphs/developing/creating/subgraph-manifest.mdx
index a434110b4282..d86f86f9381c 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/subgraph-manifest.mdx
+++ b/website/src/pages/cs/subgraphs/developing/creating/subgraph-manifest.mdx
@@ -4,19 +4,19 @@ title: Subgraph Manifest
## Přehled
-The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
+The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and makes available for querying.
-The **subgraph definition** consists of the following files:
+The **Subgraph definition** consists of the following files:
-- `subgraph.yaml`: Contains the subgraph manifest
+- `subgraph.yaml`: Contains the Subgraph manifest
-- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL
+- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL
- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema (e.g. `mapping.ts` in this guide)
### Subgraph Capabilities
-A single subgraph can:
+A single Subgraph can:
- Index data from multiple smart contracts (but not multiple networks).
@@ -24,12 +24,12 @@ A single subgraph can:
- Add an entry for each contract that requires indexing to the `dataSources` array.
-The full specification for subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md).
+The full specification for Subgraph manifests can be found [here](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md).
-For the example subgraph listed above, `subgraph.yaml` is:
+For the example Subgraph listed above, `subgraph.yaml` is:
```yaml
-specVersion: 0.0.4
+specVersion: 1.3.0
description: Gravatar for Ethereum
repository: https://github.com/graphprotocol/graph-tooling
schema:
@@ -54,7 +54,7 @@ dataSources:
data: 'bar'
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Gravatar
@@ -79,47 +79,47 @@ dataSources:
## Subgraph Entries
-> Important Note: Be sure you populate your subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/).
+> Important Note: Be sure you populate your Subgraph manifest with all handlers and [entities](/subgraphs/developing/creating/ql-schema/).
Důležité položky, které je třeba v manifestu aktualizovat, jsou:
-- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases.
+- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the Subgraph. The latest version is `1.3.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases.
-- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio.
+- `description`: a human-readable description of what the Subgraph is. This description is displayed in Graph Explorer when the Subgraph is deployed to Subgraph Studio.
-- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer.
+- `repository`: the URL of the repository where the Subgraph manifest can be found. This is also displayed in Graph Explorer.
- `features`: a list of all used [feature](#experimental-features) names.
-- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section.
+- `indexerHints.prune`: Defines the retention of historical block data for a Subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section.
-- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts.
+- `dataSources.source`: the address of the smart contract the Subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows matching events from all contracts to be indexed.
- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created.
- `dataSources.source.endBlock`: The optional number of the block that the data source stops indexing at, including that block. Minimum spec version required: `0.0.9`.
-- `dataSources.context`: key-value pairs that can be used within subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for subgraph development.
+- `dataSources.context`: key-value pairs that can be used within Subgraph mappings. Supports various data types like `Bool`, `String`, `Int`, `Int8`, `BigDecimal`, `Bytes`, `List`, and `BigInt`. Each variable needs to specify its `type` and `data`. These context variables are then accessible in the mapping files, offering more configurable options for Subgraph development.
- `dataSources.mapping.entities`: the entities that the data source writes to the store. The schema for each entity is defined in the schema.graphql file.
- `dataSources.mapping.abis`: one or more named ABI files for the source contract as well as any other smart contracts that you interact with from within the mappings.
-- `dataSources.mapping.eventHandlers`: lists the smart contract events this subgraph reacts to and the handlers in the mapping—./src/mapping.ts in the example—that transform these events into entities in the store.
+- `dataSources.mapping.eventHandlers`: lists the smart contract events this Subgraph reacts to and the handlers in the mapping (`./src/mapping.ts` in the example) that transform these events into entities in the store.
-- `dataSources.mapping.callHandlers`: lists the smart contract functions this subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store.
+- `dataSources.mapping.callHandlers`: lists the smart contract functions this Subgraph reacts to and handlers in the mapping that transform the inputs and outputs to function calls into entities in the store.
-- `dataSources.mapping.blockHandlers`: lists the blocks this subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract.
+- `dataSources.mapping.blockHandlers`: lists the blocks this Subgraph reacts to and handlers in the mapping to run when a block is appended to the chain. Without a filter, the block handler will be run every block. An optional call-filter can be provided by adding a `filter` field with `kind: call` to the handler. This will only run the handler if the block contains at least one call to the data source contract.
-A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array.
+A single Subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array.
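+
+Taken together, a single `dataSources` entry can combine these fields. Below is a minimal sketch based on the Gravity example above; the address and block numbers are illustrative, and `endBlock` requires `specVersion` >= `0.0.9`:
+
+```yaml
+dataSources:
+  - kind: ethereum/contract
+    name: Gravity
+    network: mainnet
+    source:
+      address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC'
+      abi: Gravity
+      startBlock: 6175244 # skip blocks before the contract was created
+      endBlock: 7175245 # optional; stop indexing after this block
+    context:
+      foo:
+        type: String
+        data: 'bar'
+    mapping:
+      kind: ethereum/events
+      apiVersion: 0.0.9
+      language: wasm/assemblyscript
+      entities:
+        - Gravatar
+      abis:
+        - name: Gravity
+          file: ./abis/Gravity.json
+      eventHandlers:
+        - event: NewGravatar(uint256,address,string,string)
+          handler: handleNewGravatar
+      file: ./src/mapping.ts
+```
+
+In the mappings, the `context` values declared here are readable via `dataSource.context()` from `@graphprotocol/graph-ts`.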
## Event Handlers
-Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic.
+Event handlers in a Subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the Subgraph's manifest. This enables Subgraphs to process and store event data according to defined logic.
### Defining an Event Handler
-An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected.
+An event handler is declared within a data source in the Subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected.
```yaml
dataSources:
@@ -131,7 +131,7 @@ dataSources:
abi: Gravity
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Gravatar
@@ -149,11 +149,11 @@ dataSources:
## Zpracovatelé hovorů
-While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured.
+While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a Subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured.
Obsluhy volání se spustí pouze v jednom ze dvou případů: když je zadaná funkce volána jiným účtem než samotnou smlouvou nebo když je v Solidity označena jako externí a volána jako součást jiné funkce ve stejné smlouvě.
-> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers. These are far more performant than call handlers, and are supported on every evm network.
+> **Note:** Call handlers currently depend on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more call handlers, it will not start syncing. Subgraph developers should instead use event handlers, which are far more performant than call handlers and are supported on every EVM network.
### Definice obsluhy volání
@@ -169,7 +169,7 @@ dataSources:
abi: Gravity
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Gravatar
@@ -186,7 +186,7 @@ The `function` is the normalized function signature to filter calls by. The `han
### Funkce mapování
-Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument:
+Each call handler takes a single parameter that has a type corresponding to the name of the called function. In the example Subgraph above, the mapping contains a handler for when the `createGravatar` function is called and receives a `CreateGravatarCall` parameter as an argument:
```typescript
import { CreateGravatarCall } from '../generated/Gravity/Gravity'
@@ -205,7 +205,7 @@ The `handleCreateGravatar` function takes a new `CreateGravatarCall` which is a
## Obsluha bloků
-Kromě přihlášení k událostem smlouvy nebo volání funkcí může podgraf chtít aktualizovat svá data, když jsou do řetězce přidány nové bloky. Za tímto účelem může podgraf spustit funkci po každém bloku nebo po blocích, které odpovídají předem definovanému filtru.
+In addition to subscribing to contract events or function calls, a Subgraph may want to update its data as new blocks are appended to the chain. To achieve this, a Subgraph can run a function after every block or after blocks that match a pre-defined filter.
### Podporované filtry
@@ -218,7 +218,7 @@ filter:
_The defined handler will be called once for every block which contains a call to the contract (data source) the handler is defined under._
-> **Note:** The `call` filter currently depend on the Parity tracing API. Certain networks, such as BNB chain and Arbitrum, does not support this API. If a subgraph indexing one of these networks contain one or more block handlers with a `call` filter, it will not start syncing.
+> **Note:** The `call` filter currently depends on the Parity tracing API. Certain networks, such as BNB Chain and Arbitrum, do not support this API. If a Subgraph indexing one of these networks contains one or more block handlers with a `call` filter, it will not start syncing.
Protože pro obsluhu bloku neexistuje žádný filtr, zajistí, že obsluha bude volána každý blok. Zdroj dat může obsahovat pouze jednu blokovou obsluhu pro každý typ filtru.
@@ -232,7 +232,7 @@ dataSources:
abi: Gravity
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
entities:
- Gravatar
@@ -261,7 +261,7 @@ blockHandlers:
every: 10
```
-The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the subgraph to perform specific operations at regular block intervals.
+The defined handler will be called once for every `n` blocks, where `n` is the value provided in the `every` field. This configuration allows the Subgraph to perform specific operations at regular block intervals.
#### Jednou Filtr
@@ -276,7 +276,7 @@ blockHandlers:
kind: once
```
-Definovaný obslužná rutina s filtrem once bude zavolána pouze jednou před spuštěním všech ostatních rutin. Tato konfigurace umožňuje, aby podgraf používal obslužný program jako inicializační obslužný, který provádí specifické úlohy na začátku indexování.
+The defined handler with the `once` filter will be called only once before all other handlers run. This configuration allows the Subgraph to use the handler as an initialization handler, performing specific tasks at the start of indexing.
```ts
export function handleOnce(block: ethereum.Block): void {
@@ -288,7 +288,7 @@ export function handleOnce(block: ethereum.Block): void {
### Funkce mapování
-The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing subgraph entities in the store, call smart contracts and create or update entities.
+The mapping function will receive an `ethereum.Block` as its only argument. Like mapping functions for events, this function can access existing Subgraph entities in the store, call smart contracts and create or update entities.
```typescript
import { ethereum } from '@graphprotocol/graph-ts'
@@ -317,7 +317,7 @@ An event will only be triggered when both the signature and topic 0 match. By de
Starting from `specVersion` `0.0.5` and `apiVersion` `0.0.7`, event handlers can have access to the receipt for the transaction which emitted them.
-To do so, event handlers must be declared in the subgraph manifest with the new `receipt: true` key, which is optional and defaults to false.
+To do so, event handlers must be declared in the Subgraph manifest with the new `receipt: true` key, which is optional and defaults to false.
```yaml
eventHandlers:
@@ -360,7 +360,7 @@ dataSources:
abi: Factory
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/mappings/factory.ts
entities:
@@ -390,7 +390,7 @@ templates:
abi: Exchange
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/mappings/exchange.ts
entities:
@@ -454,7 +454,7 @@ There are setters and getters like `setString` and `getString` for all value typ
## Výchozí bloky
-The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created.
+The `startBlock` is an optional setting that allows you to define from which block in the chain the data source will start indexing. Setting the start block allows the data source to skip potentially millions of blocks that are irrelevant. Typically, a Subgraph developer will set `startBlock` to the block in which the smart contract of the data source was created.
```yaml
dataSources:
@@ -467,7 +467,7 @@ dataSources:
startBlock: 6627917
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/mappings/factory.ts
entities:
@@ -488,13 +488,13 @@ dataSources:
## Tipy indexátor
-The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning.
+The `indexerHints` setting in a Subgraph's manifest provides directives for indexers on processing and managing a Subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning.
> This feature is available from `specVersion: 1.0.0`
### Prořezávat
-`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include:
+`indexerHints.prune`: Defines the retention of historical block data for a Subgraph. Options include:
1. `"never"`: No pruning of historical data; retains the entire history.
2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance.
@@ -505,19 +505,19 @@ The `indexerHints` setting in a subgraph's manifest provides directives for inde
prune: auto
```
-> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities.
+> The term "history" in this context of Subgraphs is about storing data that reflects the old states of mutable entities.
History as of a given block is required for:
-- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history
-- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block
-- Rewinding the subgraph back to that block
+- [Time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the Subgraph's history
+- Using the Subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another Subgraph, at that block
+- Rewinding the Subgraph back to that block
If historical data as of the block has been pruned, the above capabilities will not be available.
> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data.
-For subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings:
+For Subgraphs leveraging [time travel queries](/subgraphs/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your Subgraph's settings:
Uchování určitého množství historických dat:
@@ -532,3 +532,18 @@ Zachování kompletní historie entitních států:
indexerHints:
prune: never
```
+
+## SpecVersion Releases
+
+| Verze | Poznámky vydání |
+| :---: | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 1.3.0 | Added support for [Subgraph Composition](/cookbook/subgraph-composition-three-sources) |
+| 1.2.0 | Added support for [Indexed Argument Filtering](/developing/creating/advanced/#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](/developing/creating/advanced/#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports [`indexerHints`](/developing/creating/subgraph-manifest/#indexer-hints) feature to prune Subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating/subgraph-manifest/#polling-filter) and [Initialisation Handlers](/developing/creating/subgraph-manifest/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating/advanced/#ipfsarweave-file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/indexing/overview/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing Subgraph features. |
diff --git a/website/src/pages/cs/subgraphs/developing/creating/unit-testing-framework.mdx b/website/src/pages/cs/subgraphs/developing/creating/unit-testing-framework.mdx
index fd0130dd672a..691624b81344 100644
--- a/website/src/pages/cs/subgraphs/developing/creating/unit-testing-framework.mdx
+++ b/website/src/pages/cs/subgraphs/developing/creating/unit-testing-framework.mdx
@@ -2,12 +2,12 @@
title: Rámec pro testování jednotek
---
-Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their subgraphs.
+Learn how to use Matchstick, a unit testing framework developed by [LimeChain](https://limechain.tech/). Matchstick enables Subgraph developers to test their mapping logic in a sandboxed environment and successfully deploy their Subgraphs.
## Benefits of Using Matchstick
- It's written in Rust and optimized for high performance.
-- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor subgraph failures, check test performance, and many more.
+- It gives you access to developer features, including the ability to mock contract calls, make assertions about the store state, monitor Subgraph failures, check test performance, and many more.
## Začínáme
@@ -87,7 +87,7 @@ And finally, do not use `graph test` (which uses your global installation of gra
### Using Matchstick
-To use **Matchstick** in your subgraph project just open up a terminal, navigate to the root folder of your project and simply run `graph test [options] ` - it downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified).
+To use **Matchstick** in your Subgraph project, open a terminal, navigate to the root folder of your project, and run `graph test [options]`. This downloads the latest **Matchstick** binary and runs the specified test or all tests in a test folder (or all existing tests if no datasource flag is specified).
### Možnosti CLI
@@ -113,7 +113,7 @@ graph test path/to/file.test.ts
```sh
-c, --coverage Run the tests in coverage mode
--d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the subgraph)
+-d, --docker Run the tests in a docker container (Note: Please execute from the root folder of the Subgraph)
-f, --force Binary: Redownloads the binary. Docker: Redownloads the Dockerfile and rebuilds the docker image.
-h, --help Show usage information
-l, --logs Logs to the console information about the OS, CPU model and download url (debugging purposes)
@@ -145,17 +145,17 @@ libsFolder: path/to/libs
manifestPath: path/to/subgraph.yaml
```
-### Ukázkový podgraf
+### Demo Subgraph
You can try out and play around with the examples from this guide by cloning the [Demo Subgraph repo](https://github.com/LimeChain/demo-subgraph)
### Videonávody
-Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h)
+You can also check out the video series on ["How to use Matchstick to write unit tests for your Subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h)
## Tests structure
-_**IMPORTANT: The test structure described below depens on `matchstick-as` version >=0.5.0**_
+_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_
### describe()
@@ -662,7 +662,7 @@ That's a lot to unpack! First off, an important thing to notice is that we're im
A je to tady - vytvořili jsme první test! 👏
-Pro spuštění našich testů nyní stačí v kořenové složce podgrafu spustit následující příkaz:
+Now in order to run our tests you simply need to run the following in your Subgraph root folder:
`graph test Gravity`
@@ -756,7 +756,7 @@ createMockedFunction(contractAddress, 'getGravatar', 'getGravatar(address):(stri
Users can mock IPFS files by using `mockIpfsFile(hash, filePath)` function. The function accepts two arguments, the first one is the IPFS file hash/path and the second one is the path to a local file.
-NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstck to detect it, like the `processGravatar()` function in the test example bellow:
+NOTE: When testing `ipfs.map/ipfs.mapJSON`, the callback function must be exported from the test file in order for matchstick to detect it, like the `processGravatar()` function in the test example below:
`.test.ts` file:
@@ -765,7 +765,7 @@ import { assert, test, mockIpfsFile } from 'matchstick-as/assembly/index'
import { ipfs } from '@graphprotocol/graph-ts'
import { gravatarFromIpfs } from './utils'
-// Export ipfs.map() callback in order for matchstck to detect it
+// Export ipfs.map() callback in order for matchstick to detect it
export { processGravatar } from './utils'
test('ipfs.cat', () => {
@@ -1172,7 +1172,7 @@ templates:
network: mainnet
mapping:
kind: ethereum/events
- apiVersion: 0.0.6
+ apiVersion: 0.0.9
language: wasm/assemblyscript
file: ./src/token-lock-wallet.ts
handler: handleMetadata
@@ -1289,7 +1289,7 @@ test('file/ipfs dataSource creation example', () => {
## Pokrytí test
-Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
+Using **Matchstick**, Subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
The test coverage tool takes the compiled test `wasm` binaries and converts them to `wat` files, which can then be easily inspected to see whether or not the handlers defined in `subgraph.yaml` have been called. Since code coverage (and testing as a whole) is in very early stages in AssemblyScript and WebAssembly, **Matchstick** cannot check for branch coverage. Instead we rely on the assertion that if a given handler has been called, the event/function for it has been properly mocked.
@@ -1395,7 +1395,7 @@ The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as
## Další zdroje
-For any additional support, check out this [demo subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme_).
+For any additional support, check out this [demo Subgraph repo using Matchstick](https://github.com/LimeChain/demo-subgraph#readme).
## Zpětná vazba
diff --git a/website/src/pages/cs/subgraphs/developing/deploying/multiple-networks.mdx b/website/src/pages/cs/subgraphs/developing/deploying/multiple-networks.mdx
index 77f05e1ad499..e9848601ebc7 100644
--- a/website/src/pages/cs/subgraphs/developing/deploying/multiple-networks.mdx
+++ b/website/src/pages/cs/subgraphs/developing/deploying/multiple-networks.mdx
@@ -1,12 +1,13 @@
---
title: Deploying a Subgraph to Multiple Networks
+sidebarTitle: Deploying to Multiple Networks
---
-This page explains how to deploy a subgraph to multiple networks. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [Creating a subgraph](/developing/creating-a-subgraph/).
+This page explains how to deploy a Subgraph to multiple networks. To deploy a Subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a Subgraph already, see [Creating a Subgraph](/developing/creating-a-subgraph/).
-## Nasazení podgrafu do více sítí
+## Deploying the Subgraph to multiple networks
-V některých případech budete chtít nasadit stejný podgraf do více sítí, aniž byste museli duplikovat celý jeho kód. Hlavním problémem, který s tím souvisí, je skutečnost, že smluvní adresy v těchto sítích jsou různé.
+In some cases, you will want to deploy the same Subgraph to multiple networks without duplicating all of its code. The main challenge that comes with this is that the contract addresses on these networks are different.
### Using `graph-cli`
@@ -20,7 +21,7 @@ Options:
--network-file Networks config file path (default: "./networks.json")
```
-You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your subgraph during development.
+You can use the `--network` option to specify a network configuration from a `json` standard file (defaults to `networks.json`) to easily update your Subgraph during development.
> Note: The `init` command will now auto-generate a `networks.json` based on the provided information. You will then be able to update existing or add additional networks.
@@ -54,7 +55,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit
> Note: You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option.
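+For reference, a minimal `networks.json` for this setup might look like the following sketch. The top-level keys are network names, each mapping a data source name to its per-network configuration; the `Gravity` data source name, addresses, and start blocks below are illustrative placeholders:
+
+```json
+{
+  "mainnet": {
+    "Gravity": {
+      "address": "0x0000000000000000000000000000000000000001",
+      "startBlock": 6175244
+    }
+  },
+  "sepolia": {
+    "Gravity": {
+      "address": "0x0000000000000000000000000000000000000002",
+      "startBlock": 5187140
+    }
+  }
+}
+```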
-Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`:
+Now, let's assume you want to be able to deploy your Subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`:
```yaml
# ...
@@ -96,7 +97,7 @@ yarn build --network sepolia
yarn build --network sepolia --network-file path/to/config
```
-The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this:
+The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the Subgraph. Your `subgraph.yaml` file now should look like this:
```yaml
# ...
@@ -127,7 +128,7 @@ yarn deploy --network sepolia --network-file path/to/config
One way to parameterize aspects like contract addresses using older `graph-cli` versions is to generate parts of it with a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/).
-To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network:
+To illustrate this approach, let's assume a Subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network:
```json
{
@@ -179,7 +180,7 @@ In order to generate a manifest to either network, you could add two additional
}
```
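+The templated manifest that these scripts render might look roughly like the sketch below. The file name `subgraph.template.yaml` and the `Gravity` data source are illustrative assumptions; the point is the Mustache placeholders that the config files fill in:
+
+```yaml
+# subgraph.template.yaml (illustrative sketch)
+dataSources:
+  - kind: ethereum/contract
+    name: Gravity
+    network: '{{network}}'
+    source:
+      address: '{{address}}'
+      abi: Gravity
+```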
-To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands:
+To deploy this Subgraph for mainnet or Sepolia you would now simply run one of the two following commands:
```sh
# Mainnet:
@@ -193,25 +194,25 @@ A working example of this can be found [here](https://github.com/graphprotocol/e
-**Note:** This approach can also be applied to more complex situations, where it is necessary to substitute more than contract addresses and network names or where generating mappings or ABIs from templates as well.
+**Note:** This approach can also be applied to more complex situations where it is necessary to substitute more than contract addresses and network names, or where mappings or ABIs are generated from templates as well.
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error.
+This will give you the `chainHeadBlock`, which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` indicates whether the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error.
-## Zásady archivace subgrafů Subgraph Studio
+## Subgraph Studio Subgraph archive policy
-A subgraph version in Studio is archived if and only if it meets the following criteria:
+A Subgraph version in Studio is archived if and only if it meets the following criteria:
- The version is not published to the network (or pending publish)
- The version was created 45 or more days ago
-- The subgraph hasn't been queried in 30 days
+- The Subgraph hasn't been queried in 30 days
-In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived.
+In addition, when a new version is deployed, if the Subgraph has not been published, then the N-2 version of the Subgraph is archived.
-Každý podgraf ovlivněný touto zásadou má možnost vrátit danou verzi zpět.
+Every Subgraph affected by this policy has an option to bring the version in question back.
-## Kontrola stavu podgrafů
+## Checking Subgraph health
-Pokud se podgraf úspěšně synchronizuje, je to dobré znamení, že bude dobře fungovat navždy. Nové spouštěče v síti však mohou způsobit, že se podgraf dostane do neověřeného chybového stavu, nebo může začít zaostávat kvůli problémům s výkonem či operátory uzlů.
+If a Subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your Subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators.
-Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph:
+Graph Node exposes a GraphQL endpoint which you can query to check the status of your Subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a Subgraph:
```graphql
{
@@ -238,4 +239,4 @@ Graph Node exposes a GraphQL endpoint which you can query to check the status of
}
```
-This will give you the `chainHeadBlock` which you can compare with the `latestBlock` on your subgraph to check if it is running behind. `synced` informs if the subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the subgraph. In this case, you can check the `fatalError` field for details on this error.
+This will give you the `chainHeadBlock`, which you can compare with the `latestBlock` on your Subgraph to check if it is running behind. `synced` indicates whether the Subgraph has ever caught up to the chain. `health` can currently take the values of `healthy` if no errors occurred, or `failed` if there was an error which halted the progress of the Subgraph. In this case, you can check the `fatalError` field for details on this error.
diff --git a/website/src/pages/cs/subgraphs/developing/deploying/using-subgraph-studio.mdx b/website/src/pages/cs/subgraphs/developing/deploying/using-subgraph-studio.mdx
index 7c53f174237a..14be0175123c 100644
--- a/website/src/pages/cs/subgraphs/developing/deploying/using-subgraph-studio.mdx
+++ b/website/src/pages/cs/subgraphs/developing/deploying/using-subgraph-studio.mdx
@@ -2,23 +2,23 @@
title: Deploying Using Subgraph Studio
---
-Learn how to deploy your subgraph to Subgraph Studio.
+Learn how to deploy your Subgraph to Subgraph Studio.
-> Note: When you deploy a subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a subgraph, you're publishing it onchain.
+> Note: When you deploy a Subgraph, you push it to Subgraph Studio, where you'll be able to test it. It's important to remember that deploying is not the same as publishing. When you publish a Subgraph, you're publishing it onchain.
## Subgraph Studio Overview
In [Subgraph Studio](https://thegraph.com/studio/), you can do the following:
-- View a list of subgraphs you've created
-- Manage, view details, and visualize the status of a specific subgraph
-- Vytváření a správa klíčů API pro konkrétní podgrafy
+- View a list of Subgraphs you've created
+- Manage, view details, and visualize the status of a specific Subgraph
+- Create and manage your API keys for specific Subgraphs
- Restrict your API keys to specific domains and allow only certain Indexers to query with them
-- Create your subgraph
-- Deploy your subgraph using The Graph CLI
-- Test your subgraph in the playground environment
-- Integrate your subgraph in staging using the development query URL
-- Publish your subgraph to The Graph Network
+- Create your Subgraph
+- Deploy your Subgraph using The Graph CLI
+- Test your Subgraph in the playground environment
+- Integrate your Subgraph in staging using the development query URL
+- Publish your Subgraph to The Graph Network
- Manage your billing
## Install The Graph CLI
@@ -44,10 +44,10 @@ npm install -g @graphprotocol/graph-cli
1. Open [Subgraph Studio](https://thegraph.com/studio/).
2. Connect your wallet to sign in.
- You can do this via MetaMask, Coinbase Wallet, WalletConnect, or Safe.
-3. After you sign in, your unique deploy key will be displayed on your subgraph details page.
- - The deploy key allows you to publish your subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised.
+3. After you sign in, your unique deploy key will be displayed on your Subgraph details page.
+ - The deploy key allows you to publish your Subgraphs or manage your API keys and billing. It is unique but can be regenerated if you think it has been compromised.
-> Important: You need an API key to query subgraphs
+> Important: You need an API key to query Subgraphs
-### Jak vytvořit podgraf v Podgraf Studio
+### How to Create a Subgraph in Subgraph Studio
@@ -57,31 +57,25 @@ npm install -g @graphprotocol/graph-cli
-### Kompatibilita podgrafů se sítí grafů
+### Subgraph Compatibility with The Graph Network
-Aby mohly být podgrafy podporovány indexátory v síti grafů, musí:
-
-- Index a [supported network](/supported-networks/)
-- Nesmí používat žádnou z následujících funkcí:
- - ipfs.cat & ipfs.map
- - Nefatální
- - Roubování
+To be supported by Indexers on The Graph Network, Subgraphs must index a [supported network](/supported-networks/). For a full list of supported and unsupported features, check out the [Feature Support Matrix](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md) repo.
## Initialize Your Subgraph
-Once your subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command:
+Once your Subgraph has been created in Subgraph Studio, you can initialize its code through the CLI using this command:
```bash
graph init
```
-You can find the `` value on your subgraph details page in Subgraph Studio, see image below:
+You can find the `` value on your Subgraph details page in Subgraph Studio, see image below:

-After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your subgraph. You can then finalize your subgraph to make sure it works as expected.
+After running `graph init`, you will be asked to input the contract address, network, and an ABI that you want to query. This will generate a new folder on your local machine with some basic code to start working on your Subgraph. You can then finalize your Subgraph to make sure it works as expected.
-## Autorizace grafu
+## Graph Auth
-Before you can deploy your subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your subgraph details page.
+Before you can deploy your Subgraph to Subgraph Studio, you need to log into your account within the CLI. To do this, you will need your deploy key, which you can find under your Subgraph details page.
Then, use the following command to authenticate from the CLI:
@@ -91,11 +85,11 @@ graph auth
## Deploying a Subgraph
-Once you are ready, you can deploy your subgraph to Subgraph Studio.
+Once you are ready, you can deploy your Subgraph to Subgraph Studio.
-> Deploying a subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your subgraph to the decentralized network.
+> Deploying a Subgraph with the CLI pushes it to the Studio, where you can test it and update the metadata. This action won't publish your Subgraph to the decentralized network.
-Use the following CLI command to deploy your subgraph:
+Use the following CLI command to deploy your Subgraph:
```bash
graph deploy
@@ -108,30 +102,30 @@ After running this command, the CLI will ask for a version label.
## Testing Your Subgraph
-After deploying, you can test your subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.
+After deploying, you can test your Subgraph (either in Subgraph Studio or in your own app, with the deployment query URL), deploy another version, update the metadata, and publish to [Graph Explorer](https://thegraph.com/explorer) when you are ready.
-Use Subgraph Studio to check the logs on the dashboard and look for any errors with your subgraph.
+Use Subgraph Studio to check the logs on the dashboard and look for any errors with your Subgraph.
## Publish Your Subgraph
-In order to publish your subgraph successfully, review [publishing a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
+In order to publish your Subgraph successfully, review [publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
## Versioning Your Subgraph with the CLI
-If you want to update your subgraph, you can do the following:
+If you want to update your Subgraph, you can do the following:
- You can deploy a new version to Studio using the CLI (it will only be private at this point).
- Once you're happy with it, you can publish your new deployment to [Graph Explorer](https://thegraph.com/explorer).
-- This action will create a new version of your subgraph that Curators can start signaling on and Indexers can index.
+- This action will create a new version of your Subgraph that Curators can start signaling on and Indexers can index.
-You can also update your subgraph's metadata without publishing a new version. You can update your subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates subgraph details in Explorer without having to publish a new version with a new deployment.
+You can also update your Subgraph's metadata without publishing a new version. You can update your Subgraph details in Studio (under the profile picture, name, description, etc.) by checking an option called **Update Details** in [Graph Explorer](https://thegraph.com/explorer). If this is checked, an onchain transaction will be generated that updates Subgraph details in Explorer without having to publish a new version with a new deployment.
-> Note: There are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/).
+> Note: There are costs associated with publishing a new version of a Subgraph to the network. In addition to the transaction fees, you must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your Subgraph if Curators have not signaled on it. For more information, please read more [here](/resources/roles/curating/).
-## Automatická archivace verzí podgrafů
+## Automatic Archiving of Subgraph Versions
-Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in Subgraph Studio.
+Whenever you deploy a new Subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your Subgraph in Subgraph Studio.
-> Note: Previous versions of non-published subgraphs deployed to Studio will be automatically archived.
+> Note: Previous versions of non-published Subgraphs deployed to Studio will be automatically archived.

diff --git a/website/src/pages/cs/subgraphs/developing/developer-faq.mdx b/website/src/pages/cs/subgraphs/developing/developer-faq.mdx
index e07a7f06fb48..2c5d8903c4d9 100644
--- a/website/src/pages/cs/subgraphs/developing/developer-faq.mdx
+++ b/website/src/pages/cs/subgraphs/developing/developer-faq.mdx
@@ -7,37 +7,37 @@ This page summarizes some of the most common questions for developers building o
## Subgraph Related
-### 1. Co je to podgraf?
+### 1. What is a Subgraph?
-A subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process subgraphs and make them available for subgraph consumers to query.
+A Subgraph is a custom API built on blockchain data. Subgraphs are queried using the GraphQL query language and are deployed to a Graph Node using The Graph CLI. Once deployed and published to The Graph's decentralized network, Indexers process Subgraphs and make them available for Subgraph consumers to query.
-### 2. What is the first step to create a subgraph?
+### 2. What is the first step to create a Subgraph?
-To successfully create a subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/).
+To successfully create a Subgraph, you will need to install The Graph CLI. Review the [Quick Start](/subgraphs/quick-start/) to get started. For detailed information, see [Creating a Subgraph](/developing/creating-a-subgraph/).
-### 3. Can I still create a subgraph if my smart contracts don't have events?
+### 3. Can I still create a Subgraph if my smart contracts don't have events?
-It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the subgraph are triggered by contract events and are the fastest way to retrieve useful data.
+It is highly recommended that you structure your smart contracts to have events associated with data you are interested in querying. Event handlers in the Subgraph are triggered by contract events and are the fastest way to retrieve useful data.
-If the contracts you work with do not contain events, your subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower.
+If the contracts you work with do not contain events, your Subgraph can use call and block handlers to trigger indexing. However, this is not recommended, as performance will be significantly slower.
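+In the manifest, such handlers are declared alongside `eventHandlers` in the data source's `mapping` section. A rough sketch (the function signature and handler names are illustrative):
+
+```yaml
+mapping:
+  # ...
+  callHandlers:
+    - function: createGravatar(string,string)
+      handler: handleCreateGravatar
+  blockHandlers:
+    - handler: handleBlock
+```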
-### 4. Mohu změnit účet GitHub přidružený k mému podgrafu?
+### 4. Can I change the GitHub account associated with my Subgraph?
-No. Once a subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your subgraph.
+No. Once a Subgraph is created, the associated GitHub account cannot be changed. Please make sure to carefully consider this before creating your Subgraph.
-### 5. How do I update a subgraph on mainnet?
+### 5. How do I update a Subgraph on mainnet?
-You can deploy a new version of your subgraph to Subgraph Studio using the CLI. This action maintains your subgraph private, but once you’re happy with it, you can publish to Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on.
+You can deploy a new version of your Subgraph to Subgraph Studio using the CLI. This action keeps your Subgraph private, but once you’re happy with it, you can publish it to Graph Explorer. This will create a new version of your Subgraph that Curators can start signaling on.
-### 6. Is it possible to duplicate a subgraph to another account or endpoint without redeploying?
+### 6. Is it possible to duplicate a Subgraph to another account or endpoint without redeploying?
-Podgraf musíte znovu nasadit, ale pokud se ID podgrafu (hash IPFS) nezmění, nebude se muset synchronizovat od začátku.
+You have to redeploy the Subgraph, but if the Subgraph ID (IPFS hash) doesn't change, it won't have to sync from the beginning.
-### 7. How do I call a contract function or access a public state variable from my subgraph mappings?
+### 7. How do I call a contract function or access a public state variable from my Subgraph mappings?
Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/#access-to-smart-contract-state).
-### 8. Can I import `ethers.js` or other JS libraries into my subgraph mappings?
+### 8. Can I import `ethers.js` or other JS libraries into my Subgraph mappings?
Not currently, as mappings are written in AssemblyScript.
@@ -45,15 +45,15 @@ One possible alternative solution to this is to store raw data in entities and p
### 9. When listening to multiple contracts, is it possible to select the contract order to listen to events?
-V rámci podgrafu se události zpracovávají vždy v pořadí, v jakém se objevují v blocích, bez ohledu na to, zda se jedná o více smluv.
+Within a Subgraph, the events are always processed in the order they appear in the blocks, regardless of whether that is across multiple contracts or not.
### 10. How are templates different from data sources?
-Templates allow you to create data sources quickly, while your subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your subgraph will create a dynamic data source by supplying the contract address.
+Templates allow you to create data sources quickly, while your Subgraph is indexing. Your contract might spawn new contracts as people interact with it. Since you know the shape of those contracts (ABI, events, etc.) upfront, you can define how you want to index them in a template. When they are spawned, your Subgraph will create a dynamic data source by supplying the contract address.
Check out the "Instantiating a data source template" section on: [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates).
-### 11. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`?
+### 11. Is it possible to set up a Subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another dataSource in `subgraph.yaml` after running `graph init`?
Yes. On `graph init` command itself you can add multiple dataSources by entering contracts one after the other.
@@ -79,9 +79,9 @@ docker pull graphprotocol/graph-node:latest
If only one entity is created during the event and if there's nothing better available, then the transaction hash + log index would be unique. You can obfuscate these by converting that to Bytes and then piping it through `crypto.keccak256` but this won't make it more unique.
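As a plain TypeScript illustration of that idea (in a real Subgraph mapping you would build the ID from `event.transaction.hash` and `event.logIndex` using the AssemblyScript types; the helper below is a hypothetical sketch):

```typescript
// Derive a unique entity ID from the transaction hash plus the log index.
// A transaction can emit several logs, so the hash alone is not unique,
// but the (hash, logIndex) pair is unique per event.
function makeEntityId(txHash: string, logIndex: number): string {
  return `${txHash}-${logIndex.toString()}`;
}

// Two events from the same transaction still get distinct IDs:
const a = makeEntityId("0xabc123", 0);
const b = makeEntityId("0xabc123", 1);
```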
-### 15. Can I delete my subgraph?
+### 15. Can I delete my Subgraph?
-Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your subgraph.
+Yes, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) and [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) your Subgraph.
## Network Related
@@ -110,11 +110,11 @@ Yes. Sepolia supports block handlers, call handlers and event handlers. It shoul
Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the dataSource starts indexing from. In most cases, we suggest using the block where the contract was created: [Start blocks](/developing/creating-a-subgraph/#start-blocks)
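For example, a hedged manifest fragment setting the start block (the data source name, address, and block number are placeholders):

```yaml
dataSources:
  - kind: ethereum/contract
    name: MyContract        # hypothetical data source name
    network: mainnet
    source:
      address: '0x0000000000000000000000000000000000000001'  # placeholder
      abi: MyContract
      startBlock: 6627917   # block in which the contract was created
```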
-### 20. What are some tips to increase the performance of indexing? My subgraph is taking a very long time to sync
+### 20. What are some tips to increase the performance of indexing? My Subgraph is taking a very long time to sync
Yes, you should take a look at the optional start block feature to start indexing from the block where the contract was deployed: [Start blocks](/developing/creating-a-subgraph/#start-blocks)
-### 21. Is there a way to query the subgraph directly to determine the latest block number it has indexed?
+### 21. Is there a way to query the Subgraph directly to determine the latest block number it has indexed?
-Ano! Vyzkoušejte následující příkaz, přičemž "organization/subgraphName" nahraďte názvem organizace, pod kterou je publikován, a názvem vašeho podgrafu:
+Yes! Try the following command, substituting "organization/subgraphName" with the organization it is published under and the name of your Subgraph:
@@ -132,7 +132,7 @@ someCollection(first: 1000, skip: ) { ... }
### 23. If my dapp frontend uses The Graph for querying, do I need to write my API key into the frontend directly? What if we pay query fees for users – will malicious users cause our query fees to be very high?
-Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.
+Currently, the recommended approach for a dapp is to add the key to the frontend and expose it to end users. That said, you can limit that key to a hostname, like _yourdapp.io_ and Subgraph. The gateway is currently being run by Edge & Node. Part of the responsibility of a gateway is to monitor for abusive behavior and block traffic from malicious clients.
## Miscellaneous
diff --git a/website/src/pages/cs/subgraphs/developing/introduction.mdx b/website/src/pages/cs/subgraphs/developing/introduction.mdx
index 110d7639aded..b040c749c6ca 100644
--- a/website/src/pages/cs/subgraphs/developing/introduction.mdx
+++ b/website/src/pages/cs/subgraphs/developing/introduction.mdx
@@ -11,21 +11,21 @@ As a developer, you need data to build and power your dapp. Querying and indexin
On The Graph, you can:
-1. Create, deploy, and publish subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
-2. Use GraphQL to query existing subgraphs.
+1. Create, deploy, and publish Subgraphs to The Graph using Graph CLI and [Subgraph Studio](https://thegraph.com/studio/).
+2. Use GraphQL to query existing Subgraphs.
### What is GraphQL?
-- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+- [GraphQL](https://graphql.org/learn/) is the query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs.
### Developer Actions
-- Query subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps.
-- Create custom subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers.
-- Deploy, publish and signal your subgraphs within The Graph Network.
+- Query Subgraphs built by other developers in [The Graph Network](https://thegraph.com/explorer) and integrate them into your own dapps.
+- Create custom Subgraphs to fulfill specific data needs, allowing improved scalability and flexibility for other developers.
+- Deploy, publish and signal your Subgraphs within The Graph Network.
-### What are subgraphs?
+### What are Subgraphs?
-A subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL.
+A Subgraph is a custom API built on blockchain data. It extracts data from a blockchain, processes it, and stores it so that it can be easily queried via GraphQL.
-Check out the documentation on [subgraphs](/subgraphs/developing/subgraphs/) to learn specifics.
+Check out the documentation on [Subgraphs](/subgraphs/developing/subgraphs/) to learn specifics.
diff --git a/website/src/pages/cs/subgraphs/developing/managing/deleting-a-subgraph.mdx b/website/src/pages/cs/subgraphs/developing/managing/deleting-a-subgraph.mdx
index 77896e36a45d..b8c2330ca49d 100644
--- a/website/src/pages/cs/subgraphs/developing/managing/deleting-a-subgraph.mdx
+++ b/website/src/pages/cs/subgraphs/developing/managing/deleting-a-subgraph.mdx
@@ -2,30 +2,30 @@
title: Deleting a Subgraph
---
-Delete your subgraph using [Subgraph Studio](https://thegraph.com/studio/).
+Delete your Subgraph using [Subgraph Studio](https://thegraph.com/studio/).
-> Deleting your subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it.
+> Deleting your Subgraph will remove all published versions from The Graph Network, but it will remain visible on Graph Explorer and Subgraph Studio for users who have signaled on it.
## Step-by-Step
-1. Visit the subgraph's page on [Subgraph Studio](https://thegraph.com/studio/).
+1. Visit the Subgraph's page on [Subgraph Studio](https://thegraph.com/studio/).
2. Click on the three-dots to the right of the "publish" button.
-3. Click on the option to "delete this subgraph":
+3. Click on the option to "delete this Subgraph":

-4. Depending on the subgraph's status, you will be prompted with various options.
+4. Depending on the Subgraph's status, you will be prompted with various options.
- - If the subgraph is not published, simply click “delete” and confirm.
- - If the subgraph is published, you will need to confirm on your wallet before the subgraph can be deleted from Studio. If a subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required.
+ - If the Subgraph is not published, simply click “delete” and confirm.
+ - If the Subgraph is published, you will need to confirm on your wallet before the Subgraph can be deleted from Studio. If a Subgraph is published to multiple networks, such as testnet and mainnet, additional steps may be required.
-> If the owner of the subgraph has signal on it, the signaled GRT will be returned to the owner.
+> If the owner of the Subgraph has signal on it, the signaled GRT will be returned to the owner.
### Important Reminders
-- Once you delete a subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal.
-- Kurátoři již nebudou moci signalizovat na podgrafu.
-- Curators that already signaled on the subgraph can withdraw their signal at an average share price.
-- Deleted subgraphs will show an error message.
+- Once you delete a Subgraph, it will **not** appear on Graph Explorer's homepage. However, users who have signaled on it will still be able to view it on their profile pages and remove their signal.
+- Curators will not be able to signal on the Subgraph anymore.
+- Curators that already signaled on the Subgraph can withdraw their signal at an average share price.
+- Deleted Subgraphs will show an error message.
diff --git a/website/src/pages/cs/subgraphs/developing/managing/transferring-a-subgraph.mdx b/website/src/pages/cs/subgraphs/developing/managing/transferring-a-subgraph.mdx
index 0fc6632cbc40..e80bde3fa6d2 100644
--- a/website/src/pages/cs/subgraphs/developing/managing/transferring-a-subgraph.mdx
+++ b/website/src/pages/cs/subgraphs/developing/managing/transferring-a-subgraph.mdx
@@ -2,18 +2,18 @@
title: Transferring a Subgraph
---
-Subgraphs published to the decentralized network have an NFT minted to the address that published the subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network.
+Subgraphs published to the decentralized network have an NFT minted to the address that published the Subgraph. The NFT is based on a standard ERC721, which facilitates transfers between accounts on The Graph Network.
## Reminders
-- Whoever owns the NFT controls the subgraph.
-- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that subgraph on the network.
-- You can easily move control of a subgraph to a multi-sig.
-- A community member can create a subgraph on behalf of a DAO.
+- Whoever owns the NFT controls the Subgraph.
+- If the owner decides to sell or transfer the NFT, they will no longer be able to edit or update that Subgraph on the network.
+- You can easily move control of a Subgraph to a multi-sig.
+- A community member can create a Subgraph on behalf of a DAO.
## View Your Subgraph as an NFT
-To view your subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**:
+To view your Subgraph as an NFT, you can visit an NFT marketplace like **OpenSea**:
```
https://opensea.io/your-wallet-address
@@ -27,13 +27,13 @@ https://rainbow.me/your-wallet-addres
## Step-by-Step
-To transfer ownership of a subgraph, do the following:
+To transfer ownership of a Subgraph, do the following:
1. Use the UI built into Subgraph Studio:

-2. Choose the address that you would like to transfer the subgraph to:
+2. Choose the address that you would like to transfer the Subgraph to:

diff --git a/website/src/pages/cs/subgraphs/developing/publishing/publishing-a-subgraph.mdx b/website/src/pages/cs/subgraphs/developing/publishing/publishing-a-subgraph.mdx
index ed8846e26498..29c75273aa17 100644
--- a/website/src/pages/cs/subgraphs/developing/publishing/publishing-a-subgraph.mdx
+++ b/website/src/pages/cs/subgraphs/developing/publishing/publishing-a-subgraph.mdx
@@ -1,10 +1,11 @@
---
title: Zveřejnění podgrafu v decentralizované síti
+sidebarTitle: Publishing to the Decentralized Network
---
-Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network.
+Once you have [deployed your Subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/) and it's ready to go into production, you can publish it to the decentralized network.
-When you publish a subgraph to the decentralized network, you make it available for:
+When you publish a Subgraph to the decentralized network, you make it available for:
- [Curators](/resources/roles/curating/) to begin curating it.
- [Indexers](/indexing/overview/) to begin indexing it.
@@ -17,33 +18,33 @@ Check out the list of [supported networks](/supported-networks/).
1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard
2. Click on the **Publish** button
-3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/).
+3. Your Subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/).
-All published versions of an existing subgraph can:
+All published versions of an existing Subgraph can:
- Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/archived/arbitrum/arbitrum-faq/).
-- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the subgraph was published.
+- Index data on any of the [supported networks](/supported-networks/), regardless of the network on which the Subgraph was published.
-### Aktualizace metadata publikovaného podgrafu
+### Updating metadata for a published Subgraph
-- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio.
+- After publishing your Subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio.
- Once you’ve saved your changes and published the updates, they will appear in Graph Explorer.
- It's important to note that this process will not create a new version since your deployment has not changed.
## Publishing from the CLI
-As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli).
+As of version 0.73.0, you can also publish your Subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli).
1. Open the `graph-cli`.
2. Use the following commands: `graph codegen && graph build` then `graph publish`.
-3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice.
+3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice.

### Customizing your deployment
-You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags:
+You can upload your Subgraph build to a specific IPFS node and further customize your deployment with the following flags:
```
USAGE
@@ -61,33 +62,33 @@ FLAGS
```
-## Přidání signálu do podgrafu
+## Adding signal to your Subgraph
-Developers can add GRT signal to their subgraphs to incentivize Indexers to query the subgraph.
+Developers can add GRT signal to their Subgraphs to incentivize Indexers to query the Subgraph.
-- If a subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled.
+- If a Subgraph is eligible for indexing rewards, Indexers who provide a "proof of indexing" will receive a GRT reward, based on the amount of GRT signalled.
-- You can check indexing reward eligibility based on subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
+- You can check indexing reward eligibility based on Subgraph feature usage [here](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
- Specific supported networks can be checked [here](/supported-networks/).
-> Přidání signálu do podgrafu, který nemá nárok na odměny, nepřiláká další indexátory.
+> Adding signal to a Subgraph which is not eligible for rewards will not attract additional Indexers.
>
-> If your subgraph is eligible for rewards, it is recommended that you curate your own subgraph with at least 3,000 GRT in order to attract additional indexers to index your subgraph.
+> If your Subgraph is eligible for rewards, it is recommended that you curate your own Subgraph with at least 3,000 GRT in order to attract additional Indexers to index your Subgraph.
-The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs. However, signaling GRT on a particular subgraph will draw more indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
+The [Sunrise Upgrade Indexer](/archived/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all Subgraphs. However, signaling GRT on a particular Subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
-When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
+When signaling, Curators can decide to signal on a specific version of the Subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
-Indexers can find subgraphs to index based on curation signals they see in Graph Explorer.
+Indexers can find Subgraphs to index based on curation signals they see in Graph Explorer.
-
+
-Subgraph Studio enables you to add signal to your subgraph by adding GRT to your subgraph's curation pool in the same transaction it is published.
+Subgraph Studio enables you to add signal to your Subgraph by adding GRT to your Subgraph's curation pool in the same transaction it is published.

-Případně můžete přidat signál GRT do publikovaného podgrafu z Průzkumníka grafů.
+Alternatively, you can add GRT signal to a published Subgraph from Graph Explorer.

diff --git a/website/src/pages/cs/subgraphs/developing/subgraphs.mdx b/website/src/pages/cs/subgraphs/developing/subgraphs.mdx
index f197aabdc49c..a998db9c316d 100644
--- a/website/src/pages/cs/subgraphs/developing/subgraphs.mdx
+++ b/website/src/pages/cs/subgraphs/developing/subgraphs.mdx
@@ -4,83 +4,83 @@ title: Podgrafy
## What is a Subgraph?
-A subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL.
+A Subgraph is a custom, open API that extracts data from a blockchain, processes it, and stores it so it can be easily queried via GraphQL.
### Subgraph Capabilities
- **Access Data:** Subgraphs enable the querying and indexing of blockchain data for web3.
-- **Build:** Developers can build, deploy, and publish subgraphs to The Graph Network. To get started, check out the subgraph developer [Quick Start](quick-start/).
-- **Index & Query:** Once a subgraph is indexed, anyone can query it. Explore and query all subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer).
+- **Build:** Developers can build, deploy, and publish Subgraphs to The Graph Network. To get started, check out the Subgraph developer [Quick Start](quick-start/).
+- **Index & Query:** Once a Subgraph is indexed, anyone can query it. Explore and query all Subgraphs published to the network in [Graph Explorer](https://thegraph.com/explorer).
## Inside a Subgraph
-The subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and allows to query.
+The Subgraph manifest, `subgraph.yaml`, defines the smart contracts & network your Subgraph will index, the events from these contracts to pay attention to, and how to map event data to entities that Graph Node stores and makes available to query.
-The **subgraph definition** consists of the following files:
+The **Subgraph definition** consists of the following files:
-- `subgraph.yaml`: Contains the subgraph manifest
+- `subgraph.yaml`: Contains the Subgraph manifest
-- `schema.graphql`: A GraphQL schema defining the data stored for your subgraph and how to query it via GraphQL
+- `schema.graphql`: A GraphQL schema defining the data stored for your Subgraph and how to query it via GraphQL
- `mapping.ts`: [AssemblyScript Mappings](https://github.com/AssemblyScript/assemblyscript) code that translates event data into entities defined in your schema
-To learn more about each subgraph component, check out [creating a subgraph](/developing/creating-a-subgraph/).
+To learn more about each Subgraph component, check out [creating a Subgraph](/developing/creating-a-subgraph/).
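To make the three definition files concrete, here is a minimal, hypothetical `subgraph.yaml` sketch; the data-source name, contract address, entity, and handler names below are placeholders, not a real deployment:

```yaml
# Minimal, hypothetical subgraph.yaml sketch; all names and addresses are placeholders.
specVersion: 0.0.5
schema:
  file: ./schema.graphql # GraphQL schema defining the stored entities
dataSources:
  - kind: ethereum/contract
    name: ExampleToken
    network: mainnet
    source:
      address: "0x0000000000000000000000000000000000000000"
      abi: ExampleToken
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      entities:
        - Transfer
      abis:
        - name: ExampleToken
          file: ./abis/ExampleToken.json
      eventHandlers:
        # Event signature to watch, mapped to an AssemblyScript handler in mapping.ts
        - event: Transfer(indexed address,indexed address,uint256)
          handler: handleTransfer
      file: ./src/mapping.ts
```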
## Subgraph Lifecycle
-Here is a general overview of a subgraph’s lifecycle:
+Here is a general overview of a Subgraph’s lifecycle:

## Subgraph Development
-1. [Create a subgraph](/developing/creating-a-subgraph/)
-2. [Deploy a subgraph](/deploying/deploying-a-subgraph-to-studio/)
-3. [Test a subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio)
-4. [Publish a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/)
-5. [Signal on a subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph)
+1. [Create a Subgraph](/developing/creating-a-subgraph/)
+2. [Deploy a Subgraph](/deploying/deploying-a-subgraph-to-studio/)
+3. [Test a Subgraph](/deploying/subgraph-studio/#testing-your-subgraph-in-subgraph-studio)
+4. [Publish a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/)
+5. [Signal on a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/#adding-signal-to-your-subgraph)
### Build locally
-Great subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust subgraphs.
+Great Subgraphs start with a local development environment and unit tests. Developers use [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), a command-line interface tool for building and deploying Subgraphs on The Graph. They can also use [Graph TypeScript](/subgraphs/developing/creating/graph-ts/README/) and [Matchstick](/subgraphs/developing/creating/unit-testing-framework/) to create robust Subgraphs.
### Deploy to Subgraph Studio
-Once defined, a subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following:
+Once defined, a Subgraph can be [deployed to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio/). In Subgraph Studio, you can do the following:
-- Use its staging environment to index the deployed subgraph and make it available for review.
-- Verify that your subgraph doesn't have any indexing errors and works as expected.
+- Use its staging environment to index the deployed Subgraph and make it available for review.
+- Verify that your Subgraph doesn't have any indexing errors and works as expected.
### Publish to the Network
-When you're happy with your subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network.
+When you're happy with your Subgraph, you can [publish it](/subgraphs/developing/publishing/publishing-a-subgraph/) to The Graph Network.
-- This is an onchain action, which registers the subgraph and makes it discoverable by Indexers.
-- Published subgraphs have a corresponding NFT, which defines the ownership of the subgraph. You can [transfer the subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT.
-- Published subgraphs have associated metadata, which provides other network participants with useful context and information.
+- This is an onchain action, which registers the Subgraph and makes it discoverable by Indexers.
+- Published Subgraphs have a corresponding NFT, which defines the ownership of the Subgraph. You can [transfer the Subgraph's ownership](/subgraphs/developing/managing/transferring-a-subgraph/) by sending the NFT.
+- Published Subgraphs have associated metadata, which provides other network participants with useful context and information.
### Add Curation Signal for Indexing
-Published subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing you should add signal to your subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph.
+Published Subgraphs are unlikely to be picked up by Indexers without curation signal. To encourage indexing, you should add signal to your Subgraph. Learn more about signaling and [curating](/resources/roles/curating/) on The Graph.
#### What is signal?
-- Signal is locked GRT associated with a given subgraph. It indicates to Indexers that a given subgraph will receive query volume and it contributes to the indexing rewards available for processing it.
-- Third-party Curators may also signal on a given subgraph, if they deem the subgraph likely to drive query volume.
+- Signal is locked GRT associated with a given Subgraph. It indicates to Indexers that a given Subgraph will receive query volume and it contributes to the indexing rewards available for processing it.
+- Third-party Curators may also signal on a given Subgraph, if they deem the Subgraph likely to drive query volume.
### Querying & Application Development
Subgraphs on The Graph Network receive 100,000 free queries per month, after which point developers can either [pay for queries with GRT or a credit card](/subgraphs/billing/).
-Learn more about [querying subgraphs](/subgraphs/querying/introduction/).
+Learn more about [querying Subgraphs](/subgraphs/querying/introduction/).
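As a sketch of what a query looks like in practice, the snippet below builds the JSON body that a Subgraph's GraphQL endpoint expects over HTTP POST; the `tokens` entity and its fields are hypothetical and depend entirely on your Subgraph's schema:

```python
import json

def build_query_payload(query, variables=None):
    # Serialize a GraphQL query into the JSON body a Subgraph endpoint expects.
    return json.dumps({"query": query, "variables": variables or {}})

# Hypothetical query; entity and field names depend on your Subgraph's schema.
QUERY = """
{
  tokens(first: 5) {
    id
    owner
  }
}
"""

payload = build_query_payload(QUERY)
# POST `payload` to the Subgraph's query URL with any HTTP client.
```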
### Updating Subgraphs
-To update your subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing.
+To update your Subgraph with bug fixes or new functionalities, initiate a transaction to point it to the new version. You can deploy new versions of your Subgraphs to [Subgraph Studio](https://thegraph.com/studio/) for development and testing.
-- If you selected "auto-migrate" when you applied the signal, updating the subgraph will migrate any signal to the new version and incur a migration tax.
-- This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying.
+- If you selected "auto-migrate" when you applied the signal, updating the Subgraph will migrate any signal to the new version and incur a migration tax.
+- This signal migration should prompt Indexers to start indexing the new version of the Subgraph, so it should soon become available for querying.
### Deleting & Transferring Subgraphs
-If you no longer need a published subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a subgraph returns any signaled GRT to [Curators](/resources/roles/curating/).
+If you no longer need a published Subgraph, you can [delete](/subgraphs/developing/managing/deleting-a-subgraph/) or [transfer](/subgraphs/developing/managing/transferring-a-subgraph/) it. Deleting a Subgraph returns any signaled GRT to [Curators](/resources/roles/curating/).
diff --git a/website/src/pages/cs/subgraphs/explorer.mdx b/website/src/pages/cs/subgraphs/explorer.mdx
index b679cdbb8c43..2d918567ee9d 100644
--- a/website/src/pages/cs/subgraphs/explorer.mdx
+++ b/website/src/pages/cs/subgraphs/explorer.mdx
@@ -2,11 +2,11 @@
title: Průzkumník grafů
---
-Unlock the world of subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).
+Unlock the world of Subgraphs and network data with [Graph Explorer](https://thegraph.com/explorer).
## Overview
-Graph Explorer consists of multiple parts where you can interact with [subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
+Graph Explorer consists of multiple parts where you can interact with [Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one), [delegate](https://thegraph.com/explorer/delegate?chain=arbitrum-one), engage [participants](https://thegraph.com/explorer/participants?chain=arbitrum-one), view [network information](https://thegraph.com/explorer/network?chain=arbitrum-one), and access your user profile.
## Inside Explorer
@@ -14,33 +14,33 @@ The following is a breakdown of all the key features of Graph Explorer. For addi
### Subgraphs Page
-After deploying and publishing your subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:
+After deploying and publishing your Subgraph in Subgraph Studio, go to [Graph Explorer](https://thegraph.com/explorer) and click on the "[Subgraphs](https://thegraph.com/explorer?chain=arbitrum-one)" link in the navigation bar to access the following:
-- Your own finished subgraphs
+- Your own finished Subgraphs
- Subgraphs published by others
-- The exact subgraph you want (based on the date created, signal amount, or name).
+- The exact Subgraph you want (based on the date created, signal amount, or name).

-When you click into a subgraph, you will be able to do the following:
+When you click into a Subgraph, you will be able to do the following:
- Test queries in the playground and be able to leverage network details to make informed decisions.
-- Signal GRT on your own subgraph or the subgraphs of others to make indexers aware of its importance and quality.
+- Signal GRT on your own Subgraph or the Subgraphs of others to make Indexers aware of their importance and quality.
- - This is critical because signaling on a subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.
+ - This is critical because signaling on a Subgraph incentivizes it to be indexed, meaning it’ll eventually surface on the network to serve queries.

-On each subgraph’s dedicated page, you can do the following:
+On each Subgraph’s dedicated page, you can do the following:
-- Signál/nesignál na podgraf
+- Signal/Un-signal on Subgraphs
- View more details, such as charts, the current deployment ID, and other metadata
-- Přepínání verzí pro zkoumání minulých iterací podgrafu
-- Dotazování na podgrafy prostřednictvím GraphQL
-- Testování podgrafů na hřišti
-- Zobrazení indexátorů, které indexují na určitém podgrafu
+- Switch versions to explore past iterations of the Subgraph
+- Query Subgraphs via GraphQL
+- Test Subgraphs in the playground
+- View the Indexers that are indexing on a certain Subgraph
- Subgraph stats (allocations, Curators, etc.)
-- Zobrazení subjektu, který podgraf zveřejnil
+- View the entity who published the Subgraph

@@ -53,7 +53,7 @@ On this page, you can see the following:
- Indexers who collected the most query fees
- Indexers with the highest estimated APR
-Additionally, you can calculate your ROI and search for top Indexers by name, address, or subgraph.
+Additionally, you can calculate your ROI and search for top Indexers by name, address, or Subgraph.
### Participants Page
@@ -63,9 +63,9 @@ This page provides a bird's-eye view of all "participants," which includes every

-Indexers are the backbone of the protocol. They stake on subgraphs, index them, and serve queries to anyone consuming subgraphs.
+Indexers are the backbone of the protocol. They stake on Subgraphs, index them, and serve queries to anyone consuming Subgraphs.
-In the Indexers table, you can see an Indexers’ delegation parameters, their stake, how much they have staked to each subgraph, and how much revenue they have made from query fees and indexing rewards.
+In the Indexers table, you can see an Indexer's delegation parameters, their stake, how much they have staked to each Subgraph, and how much revenue they have made from query fees and indexing rewards.
**Specifics**
@@ -74,7 +74,7 @@ In the Indexers table, you can see an Indexers’ delegation parameters, their s
- Cooldown Remaining - the time remaining until the Indexer can change the above delegation parameters. Cooldown periods are set up by Indexers when they update their delegation parameters.
- Owned - This is the Indexer’s deposited stake, which may be slashed for malicious or incorrect behavior.
- Delegated - Stake from Delegators which can be allocated by the Indexer, but cannot be slashed.
-- Allocated - Stake that Indexers are actively allocating towards the subgraphs they are indexing.
+- Allocated - Stake that Indexers are actively allocating towards the Subgraphs they are indexing.
- Available Delegation Capacity - the amount of delegated stake the Indexers can still receive before they become over-delegated.
- Max Delegation Capacity - the maximum amount of delegated stake the Indexer can productively accept. Excess delegated stake cannot be used for allocations or rewards calculations.
- Query Fees - this is the total fees that end users have paid for queries from an Indexer over all time.
@@ -90,10 +90,10 @@ To learn more about how to become an Indexer, you can take a look at the [offici
#### 2. Curators
-Curators analyze subgraphs to identify which subgraphs are of the highest quality. Once a Curator has found a potentially high-quality subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which subgraphs are high quality and should be indexed.
+Curators analyze Subgraphs to identify which Subgraphs are of the highest quality. Once a Curator has found a potentially high-quality Subgraph, they can curate it by signaling on its bonding curve. In doing so, Curators let Indexers know which Subgraphs are high quality and should be indexed.
-- Curators can be community members, data consumers, or even subgraph developers who signal on their own subgraphs by depositing GRT tokens into a bonding curve.
- - By depositing GRT, Curators mint curation shares of a subgraph. As a result, they can earn a portion of the query fees generated by the subgraph they have signaled on.
+- Curators can be community members, data consumers, or even Subgraph developers who signal on their own Subgraphs by depositing GRT tokens into a bonding curve.
+ - By depositing GRT, Curators mint curation shares of a Subgraph. As a result, they can earn a portion of the query fees generated by the Subgraph they have signaled on.
- The bonding curve incentivizes Curators to curate the highest quality data sources.
In the Curator table listed below, you can see:
@@ -144,8 +144,8 @@ The overview section has both all the current network metrics and some cumulativ
A few key details to note:
-- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the subgraphs have been closed and the data they served has been validated by the consumers.
-- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (ie. during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).
+- **Query fees represent the fees generated by the consumers**. They can be claimed (or not) by the Indexers after a period of at least 7 epochs (see below) after their allocations towards the Subgraphs have been closed and the data they served has been validated by the consumers.
+- **Indexing rewards represent the amount of rewards the Indexers claimed from the network issuance during the epoch.** Although the protocol issuance is fixed, the rewards only get minted once Indexers close their allocations towards the Subgraphs they’ve been indexing. So, the per-epoch number of rewards varies (i.e., during some epochs, Indexers might’ve collectively closed allocations that have been open for many days).

@@ -178,15 +178,15 @@ In this section, you can view the following:
### Subgraphs Tab
-In the Subgraphs tab, you’ll see your published subgraphs.
+In the Subgraphs tab, you’ll see your published Subgraphs.
-> This will not include any subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.
+> This will not include any Subgraphs deployed with the CLI for testing purposes. Subgraphs will only show up when they are published to the decentralized network.

### Indexing Tab
-In the Indexing tab, you’ll find a table with all the active and historical allocations towards subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.
+In the Indexing tab, you’ll find a table with all the active and historical allocations towards Subgraphs. You will also find charts where you can see and analyze your past performance as an Indexer.
This section will also include details about your net Indexer rewards and net query fees. You’ll see the following metrics:
@@ -223,13 +223,13 @@ Nezapomeňte, že tento graf lze horizontálně posouvat, takže pokud se posune
### Curating Tab
-Na kartě Kurátorství najdete všechny dílčí grafy, na které signalizujete (a které vám tak umožňují přijímat poplatky za dotazy). Signalizace umožňuje kurátorům upozornit indexátory na to, které podgrafy jsou hodnotné a důvěryhodné, a tím signalizovat, že je třeba je indexovat.
+In the Curation tab, you’ll find all the Subgraphs you’re signaling on (thus enabling you to receive query fees). Signaling allows Curators to highlight to Indexers which Subgraphs are valuable and trustworthy, thus signaling that they need to be indexed.
On this tab, you’ll find an overview of:
-- Všechny dílčí podgrafy, na kterých kurátor pracuje, s podrobnostmi o signálu
-- Celkové podíly na podgraf
-- Odměny za dotaz na podgraf
+- All the Subgraphs you're curating on with signal details
+- Share totals per Subgraph
+- Query rewards per Subgraph
- Updated at date details

diff --git a/website/src/pages/cs/subgraphs/guides/arweave.mdx b/website/src/pages/cs/subgraphs/guides/arweave.mdx
new file mode 100644
index 000000000000..dff8facf77d4
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/guides/arweave.mdx
@@ -0,0 +1,239 @@
+---
+title: Building Subgraphs on Arweave
+---
+
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave Subgraphs!
+
+In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain.
+
+## What is Arweave?
+
+The Arweave protocol allows developers to store data permanently, and that is the main difference between Arweave and IPFS: IPFS lacks permanence, while files stored on Arweave can't be changed or deleted.
+
+Arweave has already built numerous libraries for integrating the protocol into a number of different programming languages. For more information you can check:
+
+- [Arwiki](https://arwiki.wiki/#/en/main)
+- [Arweave Resources](https://www.arweave.org/build)
+
+## What are Arweave Subgraphs?
+
+The Graph allows you to build custom open APIs called "Subgraphs". Subgraphs are used to tell indexers (server operators) which data to index on a blockchain and save on their servers in order for you to be able to query it at any time using [GraphQL](https://graphql.org/).
+
+[Graph Node](https://github.com/graphprotocol/graph-node) is now able to index data on Arweave protocol. The current integration is only indexing Arweave as a blockchain (blocks and transactions), it is not indexing the stored files yet.
+
+## Building an Arweave Subgraph
+
+To be able to build and deploy Arweave Subgraphs, you need two packages:
+
+1. `@graphprotocol/graph-cli` above version 0.30.2 - This is a command-line tool for building and deploying Subgraphs. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-cli) to download using `npm`.
+2. `@graphprotocol/graph-ts` above version 0.27.0 - This is a library of Subgraph-specific types. [Click here](https://www.npmjs.com/package/@graphprotocol/graph-ts) to download using `npm`.
+
+## Subgraph's Components
+
+There are three components of a Subgraph:
+
+### 1. Manifest - `subgraph.yaml`
+
+Defines the data sources of interest, and how they should be processed. Arweave is a new kind of data source.
+
+### 2. Schema - `schema.graphql`
+
+Here you define which data you want to be able to query after indexing your Subgraph using GraphQL. This is actually similar to a model for an API, where the model defines the structure of a request body.
+
+The requirements for Arweave Subgraphs are covered by the [existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+
+### 3. AssemblyScript Mappings - `mapping.ts`
+
+This is the logic that determines how data should be retrieved and stored when someone interacts with the data sources you are listening to. The data gets translated and is stored based on the schema you have listed.
+
+During Subgraph development there are two key commands:
+
+```
+$ graph codegen # generates types from the schema file identified in the manifest
+$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
+```
+
+## Subgraph Manifest Definition
+
+The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for an Arweave Subgraph:
+
+```yaml
+specVersion: 1.3.0
+description: Arweave Blocks Indexing
+schema:
+ file: ./schema.graphql # link to the schema file
+dataSources:
+ - kind: arweave
+ name: arweave-blocks
+ network: arweave-mainnet # The Graph only supports Arweave Mainnet
+ source:
+ owner: 'ID-OF-AN-OWNER' # The public key of an Arweave wallet
+ startBlock: 0 # set this to 0 to start indexing from chain genesis
+ mapping:
+ apiVersion: 0.0.9
+ language: wasm/assemblyscript
+ file: ./src/blocks.ts # link to the file with the Assemblyscript mappings
+ entities:
+ - Block
+ - Transaction
+ blockHandlers:
+ - handler: handleBlock # the function name in the mapping file
+ transactionHandlers:
+ - handler: handleTx # the function name in the mapping file
+```
+
+- Arweave Subgraphs introduce a new kind of data source (`arweave`)
+- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet`
+- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet
+
+Arweave data sources support two types of handlers:
+
+- `blockHandlers` - Run on every new Arweave block. No source.owner is required.
+- `transactionHandlers` - Run on every transaction where the data source's `source.owner` is the owner. Currently an owner is required for `transactionHandlers`, if users want to process all transactions they should provide "" as the `source.owner`
+
+> The source.owner can be the owner's address, or their Public Key.
+>
+> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users.
+>
+> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet.
+
+## Schema Definition
+
+Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on the Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+
+## AssemblyScript Mappings
+
+The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).
+
+Arweave indexing introduces Arweave-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/).
+
+```tsx
+class Block {
+ timestamp: u64
+ lastRetarget: u64
+ height: u64
+ indepHash: Bytes
+ nonce: Bytes
+ previousBlock: Bytes
+ diff: Bytes
+ hash: Bytes
+ txRoot: Bytes
+ txs: Bytes[]
+ walletList: Bytes
+ rewardAddr: Bytes
+ tags: Tag[]
+ rewardPool: Bytes
+ weaveSize: Bytes
+ blockSize: Bytes
+ cumulativeDiff: Bytes
+ hashListMerkle: Bytes
+ poa: ProofOfAccess
+}
+
+class Transaction {
+ format: u32
+ id: Bytes
+ lastTx: Bytes
+ owner: Bytes
+ tags: Tag[]
+ target: Bytes
+ quantity: Bytes
+ data: Bytes
+ dataSize: Bytes
+ dataRoot: Bytes
+ signature: Bytes
+ reward: Bytes
+}
+```
+
+Block handlers receive a `Block`, while transactions receive a `Transaction`.
+
+Writing the mappings of an Arweave Subgraph is very similar to writing the mappings of an Ethereum Subgraph. For more information, click [here](/developing/creating-a-subgraph/#writing-mappings).
+
+## Deploying an Arweave Subgraph in Subgraph Studio
+
+Once your Subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command.
+
+```bash
+graph deploy --access-token
+```
+
+## Querying an Arweave Subgraph
+
+The GraphQL endpoint for Arweave Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
+
+## Example Subgraphs
+
+Here is an example Subgraph for reference:
+
+- [Example Subgraph for Arweave](https://github.com/graphprotocol/graph-tooling/tree/main/examples/arweave-blocks-transactions)
+
+## FAQ
+
+### Can a Subgraph index Arweave and other chains?
+
+No, a Subgraph can only support data sources from one chain/network.
+
+### Can I index the stored files on Arweave?
+
+Currently, The Graph only indexes Arweave as a blockchain (its blocks and transactions).
+
+### Can I identify Bundlr bundles in my Subgraph?
+
+This is not currently supported.
+
+### How can I filter transactions to a specific account?
+
+The source.owner can be the user's public key or account address.
+
+### What is the current encryption format?
+
+Data is generally passed into the mappings as Bytes, which if stored directly is returned in the Subgraph in a `hex` format (ex. block and transaction hashes). You may want to convert to a `base64` or `base64 URL`-safe format in your mappings, in order to match what is displayed in block explorers like [Arweave Explorer](https://viewblock.io/arweave/).
+
+The following `bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string` helper function can be used, and will be added to `graph-ts`:
+
+```
+const base64Alphabet = [
+ "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
+ "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
+ "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
+ "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
+ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "+", "/"
+];
+
+const base64UrlAlphabet = [
+ "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
+ "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z",
+ "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",
+ "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
+ "0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "-", "_"
+];
+
+function bytesToBase64(bytes: Uint8Array, urlSafe: boolean): string {
+ let alphabet = urlSafe ? base64UrlAlphabet : base64Alphabet;
+
+ let result = '', i: i32, l = bytes.length;
+ for (i = 2; i < l; i += 3) {
+ result += alphabet[bytes[i - 2] >> 2];
+ result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+ result += alphabet[((bytes[i - 1] & 0x0F) << 2) | (bytes[i] >> 6)];
+ result += alphabet[bytes[i] & 0x3F];
+ }
+ if (i === l + 1) { // 1 octet yet to write
+ result += alphabet[bytes[i - 2] >> 2];
+ result += alphabet[(bytes[i - 2] & 0x03) << 4];
+ if (!urlSafe) {
+ result += "==";
+ }
+ }
+ if (i === l) { // 2 octets yet to write
+ result += alphabet[bytes[i - 2] >> 2];
+ result += alphabet[((bytes[i - 2] & 0x03) << 4) | (bytes[i - 1] >> 4)];
+ result += alphabet[(bytes[i - 1] & 0x0F) << 2];
+ if (!urlSafe) {
+ result += "=";
+ }
+ }
+ return result;
+}
+```
diff --git a/website/src/pages/cs/subgraphs/guides/contract-analyzer.mdx b/website/src/pages/cs/subgraphs/guides/contract-analyzer.mdx
new file mode 100644
index 000000000000..9f53796b8066
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/guides/contract-analyzer.mdx
@@ -0,0 +1,117 @@
+---
+title: Smart Contract Analysis with Cana CLI
+---
+
+Improve smart contract analysis with **Cana CLI**. It's fast, efficient, and designed specifically for EVM chains.
+
+## Overview
+
+**Cana CLI** is a command-line tool that streamlines smart contract metadata analysis for Subgraph development across multiple EVM-compatible chains. It simplifies retrieving contract details, detecting proxy implementations, extracting ABIs, and more.
+
+### Key Features
+
+With Cana CLI, you can:
+
+- Detect deployment blocks
+- Verify source code
+- Extract ABIs & event signatures
+- Identify proxy and implementation contracts
+- Support multiple chains
+
+### Prerequisites
+
+Before installing Cana CLI, make sure you have:
+
+- [Node.js v16+](https://nodejs.org/en)
+- [npm v6+](https://docs.npmjs.com/cli/v11/commands/npm-install)
+- Block explorer API keys
+
+### Installation & Setup
+
+1. Install Cana CLI
+
+Use npm to install it globally:
+
+```bash
+npm install -g contract-analyzer
+```
+
+2. Configure Cana CLI
+
+Set up a blockchain environment for analysis:
+
+```bash
+cana setup
+```
+
+During setup, you'll be prompted to provide the required block explorer API key and block explorer endpoint URL.
+
+After setup, Cana CLI creates a configuration file at `~/.contract-analyzer/config.json`. This file stores your block explorer API credentials, endpoint URLs, and chain selection preferences for future use.
+
+### Steps: Using Cana CLI for Smart Contract Analysis
+
+#### 1. Select a Chain
+
+Cana CLI supports multiple EVM-compatible chains.
+
+For a list of chains added run this command:
+
+```bash
+cana chains
+```
+
+Then select a chain with this command:
+
+```bash
+cana chains --switch
+```
+
+Once a chain is selected, all subsequent contract analyses will continue on that chain.
+
+#### 2. Basic Contract Analysis
+
+Run the following command to analyze a contract:
+
+```bash
+cana analyze 0xContractAddress
+```
+
+or
+
+```bash
+cana -a 0xContractAddress
+```
+
+This command fetches and displays essential contract information in the terminal using a clear, organized format.
+
+#### 3. Understanding the Output
+
+Cana CLI organizes results into the terminal and into a structured directory when detailed contract data is successfully retrieved:
+
+```
+contracts-analyzed/
+└── ContractName_chainName_YYYY-MM-DD/
+ ├── contract/ # Folder for individual contract files
+ ├── abi.json # Contract ABI
+ └── event-information.json # Event signatures and examples
+```
+
+This format makes it easy to reference contract metadata, event signatures, and ABIs for Subgraph development.
+
+#### 4. Chain Management
+
+Add and manage chains:
+
+```bash
+cana setup # Add a new chain
+cana chains # List configured chains
+cana chains -s # Switch chains
+```
+
+### Troubleshooting
+
+Missing Data? Ensure that the contract address is correct, that it's verified on the block explorer, and that your API key has the required permissions.
+
+### Conclusion
+
+With Cana CLI, you can efficiently analyze smart contracts, extract crucial metadata, and support Subgraph development with ease.
diff --git a/website/src/pages/cs/subgraphs/guides/enums.mdx b/website/src/pages/cs/subgraphs/guides/enums.mdx
new file mode 100644
index 000000000000..7cc0e6c0ed78
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/guides/enums.mdx
@@ -0,0 +1,274 @@
+---
+title: Categorize NFT Marketplaces Using Enums
+---
+
+Use Enums to make your code cleaner and less error-prone. Here's a full example of using Enums on NFT marketplaces.
+
+## What are Enums?
+
+Enums, or enumeration types, are a specific data type that allows you to define a set of specific, allowed values.
+
+### Example of Enums in Your Schema
+
+If you're building a Subgraph to track the ownership history of tokens on a marketplace, each token might go through different ownerships, such as `OriginalOwner`, `SecondOwner`, and `ThirdOwner`. By using enums, you can define these specific ownerships, ensuring only predefined values are assigned.
+
+You can define enums in your schema, and once defined, you can use the string representation of the enum values to set an enum field on an entity.
+
+Here's what an enum definition might look like in your schema, based on the example above:
+
+```graphql
+enum TokenStatus {
+ OriginalOwner
+ SecondOwner
+ ThirdOwner
+}
+```
+
+This means that when you use the `TokenStatus` type in your schema, you expect it to be exactly one of predefined values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`, ensuring consistency and validity.
+
+To learn more about enums, check out [Creating a Subgraph](/developing/creating-a-subgraph/#enums) and [GraphQL documentation](https://graphql.org/learn/schema/#enumeration-types).
+
+## Benefits of Using Enums
+
+- **Clarity:** Enums provide meaningful names for values, making data easier to understand.
+- **Validation:** Enums enforce strict value definitions, preventing invalid data entries.
+- **Maintainability:** When you need to change or add new categories, enums allow you to do this in a focused manner.
+
+### Without Enums
+
+If you choose to define the type as a string instead of using an Enum, your code might look like this:
+
+```graphql
+type Token @entity {
+ id: ID!
+ tokenId: BigInt!
+ owner: Bytes! # Owner of the token
+ tokenStatus: String! # String field to track token status
+ timestamp: BigInt!
+}
+```
+
+In this schema, `TokenStatus` is a simple string with no specific, allowed values.
+
+#### Why is this a problem?
+
+- There's no restriction of `TokenStatus` values, so any string can be accidentally assigned. This makes it hard to ensure that only valid statuses like `OriginalOwner`, `SecondOwner`, or `ThirdOwner` are set.
+- It's easy to make typos such as `Orgnalowner` instead of `OriginalOwner`, making the data and potential queries unreliable.
+
+### With Enums
+
+Instead of assigning free-form strings, you can define an enum for `TokenStatus` with specific values: `OriginalOwner`, `SecondOwner`, or `ThirdOwner`. Using an enum ensures only allowed values are used.
+
+Enums provide type safety, minimize typo risks, and ensure consistent and reliable results.
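To make the type-safety point concrete, here is a small TypeScript sketch (illustrative only — the `TokenStatus` union and `isValidTokenStatus` helper are hypothetical, not part of `graph-ts` or generated Subgraph code):

```typescript
// A string union mirrors the GraphQL enum: assigning a misspelled status is
// a compile-time error, and untrusted input can be validated at runtime.
type TokenStatus = 'OriginalOwner' | 'SecondOwner' | 'ThirdOwner'

const ALLOWED_STATUSES = ['OriginalOwner', 'SecondOwner', 'ThirdOwner']

function isValidTokenStatus(value: string): value is TokenStatus {
  return ALLOWED_STATUSES.includes(value)
}

console.log(isValidTokenStatus('OriginalOwner')) // true
console.log(isValidTokenStatus('Orgnalowner')) // false — the typo is rejected
```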
+
+## Defining Enums for NFT Marketplaces
+
+> Note: The following guide uses the CryptoCoven NFT smart contract.
+
+To define enums for the various marketplaces where NFTs are traded, use the following in your Subgraph schema:
+
+```gql
+# Enum for Marketplaces that the CryptoCoven contract interacted with (likely a Trade/Mint)
+enum Marketplace {
+ OpenSeaV1 # Represents when a CryptoCoven NFT is traded on the OpenSeaV1 marketplace
+ OpenSeaV2 # Represents when a CryptoCoven NFT is traded on the OpenSeaV2 marketplace
+ SeaPort # Represents when a CryptoCoven NFT is traded on the SeaPort marketplace
+ LooksRare # Represents when a CryptoCoven NFT is traded on the LooksRare marketplace
+ # ...and other marketplaces
+}
+```
+
+## Using Enums for NFT Marketplaces
+
+Once defined, enums can be used throughout your Subgraph to categorize transactions or events.
+
+For example, when logging NFT sales, you can specify the marketplace involved in the trade using the enum.
+
+### Implementing a Function for NFT Marketplaces
+
+Here's how you can implement a function to retrieve the marketplace name from the enum as a string:
+
+```ts
+export function getMarketplaceName(marketplace: Marketplace): string {
+ // Using if-else statements to map the enum value to a string
+ if (marketplace === Marketplace.OpenSeaV1) {
+ return 'OpenSeaV1' // If the marketplace is OpenSea, return its string representation
+ } else if (marketplace === Marketplace.OpenSeaV2) {
+ return 'OpenSeaV2'
+ } else if (marketplace === Marketplace.SeaPort) {
+ return 'SeaPort' // If the marketplace is SeaPort, return its string representation
+ } else if (marketplace === Marketplace.LooksRare) {
+ return 'LooksRare' // If the marketplace is LooksRare, return its string representation
+ // ... and other marketplaces
+ }
+ // Fallback so every code path returns a value (required by AssemblyScript)
+ return 'Unknown'
+}
+```
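A self-contained usage sketch of the helper above (the numeric `Marketplace` enum here is a local stand-in for illustration, not generated Subgraph code):

```typescript
enum Marketplace {
  OpenSeaV1,
  OpenSeaV2,
  SeaPort,
  LooksRare,
}

function getMarketplaceName(marketplace: Marketplace): string {
  if (marketplace === Marketplace.OpenSeaV1) return 'OpenSeaV1'
  if (marketplace === Marketplace.OpenSeaV2) return 'OpenSeaV2'
  if (marketplace === Marketplace.SeaPort) return 'SeaPort'
  if (marketplace === Marketplace.LooksRare) return 'LooksRare'
  return 'Unknown' // fallback keeps every code path returning a value
}

console.log(getMarketplaceName(Marketplace.SeaPort)) // SeaPort
```

Mapping each enum member to its exact string representation keeps the values stored by the Subgraph consistent with the enum defined in the schema.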
+
+## Best Practices for Using Enums
+
+- **Consistent Naming:** Use clear, descriptive names for enum values to improve readability.
+- **Centralized Management:** Keep enums in a single file for consistency. This makes enums easier to update and ensures they are the single source of truth.
+- **Documentation:** Add comments to enums to clarify their purpose and usage.
+
+## Using Enums in Queries
+
+Enums in queries help you improve data quality and make your results easier to interpret. They function as filters and response elements, ensuring consistency and reducing errors in marketplace values.
+
+**Specifics**
+
+- **Filtering with Enums:** Enums provide clear filters, allowing you to confidently include or exclude specific marketplaces.
+- **Enums in Responses:** Enums guarantee that only recognized marketplace names are returned, making the results standardized and accurate.
+
+### Sample Queries
+
+#### Query 1: Account With The Highest NFT Marketplace Interactions
+
+This query does the following:
+
+- It finds the account with the highest unique NFT marketplace interactions, which is great for analyzing cross-marketplace activity.
+- The marketplaces field uses the marketplace enum, ensuring consistent and validated marketplace values in the response.
+
+```gql
+{
+ accounts(first: 1, orderBy: uniqueMarketplacesCount, orderDirection: desc) {
+ id
+ sendCount
+ receiveCount
+ totalSpent
+ uniqueMarketplacesCount
+ marketplaces {
+ marketplace # This field returns the enum value representing the marketplace
+ }
+ }
+}
+```
+
+#### Returns
+
+This response provides account details and a list of unique marketplace interactions with enum values for standardized clarity:
+
+```gql
+{
+ "data": {
+ "accounts": [
+ {
+ "id": "0xb3abc96cb9a61576c03c955d75b703a890a14aa0",
+ "sendCount": "44",
+ "receiveCount": "44",
+ "totalSpent": "1197500000000000000",
+ "uniqueMarketplacesCount": "7",
+ "marketplaces": [
+ {
+ "marketplace": "OpenSeaV1"
+ },
+ {
+ "marketplace": "OpenSeaV2"
+ },
+ {
+ "marketplace": "GenieSwap"
+ },
+ {
+ "marketplace": "CryptoCoven"
+ },
+ {
+ "marketplace": "Unknown"
+ },
+ {
+ "marketplace": "LooksRare"
+ },
+ {
+ "marketplace": "NFTX"
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+#### Query 2: Most Active Marketplace for CryptoCoven transactions
+
+This query does the following:
+
+- It identifies the marketplace with the highest volume of CryptoCoven transactions.
+- It uses the marketplace enum to ensure that only valid marketplace types appear in the response, adding reliability and consistency to your data.
+
+```gql
+{
+ marketplaceInteractions(first: 1, orderBy: transactionCount, orderDirection: desc) {
+ marketplace
+ transactionCount
+ }
+}
+```
+
+#### Result 2
+
+The expected response includes the marketplace and the corresponding transaction count, using the enum to indicate the marketplace type:
+
+```gql
+{
+ "data": {
+ "marketplaceInteractions": [
+ {
+ "marketplace": "Unknown",
+ "transactionCount": "222"
+ }
+ ]
+ }
+}
+```
+
+#### Query 3: Marketplace Interactions with High Transaction Counts
+
+This query does the following:
+
+- It retrieves the top four marketplaces with over 100 transactions, excluding "Unknown" marketplaces.
+- It uses enums as filters to ensure that only valid marketplace types are included, increasing accuracy.
+
+```gql
+{
+ marketplaceInteractions(
+ first: 4
+ orderBy: transactionCount
+ orderDirection: desc
+ where: { transactionCount_gt: "100", marketplace_not: "Unknown" }
+ ) {
+ marketplace
+ transactionCount
+ }
+}
+```
+
+#### Result 3
+
+Expected output includes the marketplaces that meet the criteria, each represented by an enum value:
+
+```gql
+{
+ "data": {
+ "marketplaceInteractions": [
+ {
+ "marketplace": "NFTX",
+ "transactionCount": "201"
+ },
+ {
+ "marketplace": "OpenSeaV1",
+ "transactionCount": "148"
+ },
+ {
+ "marketplace": "CryptoCoven",
+ "transactionCount": "117"
+ },
+ {
+ "marketplace": "OpenSeaV1",
+ "transactionCount": "111"
+ }
+ ]
+ }
+}
+```
+
+## Additional Resources
+
+For additional information, check out this guide's [repo](https://github.com/chidubemokeke/Subgraph-Tutorial-Enums).
diff --git a/website/src/pages/cs/subgraphs/guides/grafting.mdx b/website/src/pages/cs/subgraphs/guides/grafting.mdx
new file mode 100644
index 000000000000..a7bad43c9c1f
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/guides/grafting.mdx
@@ -0,0 +1,202 @@
+---
+title: Replace a Contract and Keep its History With Grafting
+---
+
+In this guide, you will learn how to build and deploy new Subgraphs by grafting existing Subgraphs.
+
+## What is Grafting?
+
+Grafting reuses the data from an existing Subgraph and starts indexing it at a later block. This is useful during development to get past simple errors in the mappings quickly or to temporarily get an existing Subgraph working again after it has failed. Also, it can be used when adding a feature to a Subgraph that takes long to index from scratch.
+
+The grafted Subgraph can use a GraphQL schema that is not identical to the one of the base Subgraph, but merely compatible with it. It has to be a valid Subgraph schema in its own right, but may deviate from the base Subgraph's schema in the following ways:
+
+- Adds or removes entity types
+- Removes attributes from entity types
+- Adds nullable attributes to entity types
+- Turns non-nullable attributes into nullable attributes
+- Adds values to enums
+- Adds or removes interfaces
+- Changes for which entity types an interface is implemented
+
+For more information, you can check:
+
+- [Grafting](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs)
+
+In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, graft the existing Subgraph onto the "base" Subgraph that tracks the new contract.
+
+## Important Note on Grafting When Upgrading to the Network
+
+> **Caution**: It is recommended to not use grafting for Subgraphs published to The Graph Network
+
+### Why Is This Important?
+
+Grafting is a powerful feature that allows you to "graft" one Subgraph onto another, effectively transferring historical data from the existing Subgraph to a new version. It is not possible to graft a Subgraph from The Graph Network back to Subgraph Studio.
+
+### Best Practices
+
+**Initial Migration**: when you first deploy your Subgraph to the decentralized network, do so without grafting. Ensure that the Subgraph is stable and functioning as expected.
+
+**Subsequent Updates**: once your Subgraph is live and stable on the decentralized network, you may use grafting for future versions to make the transition smoother and to preserve historical data.
+
+By adhering to these guidelines, you minimize risks and ensure a smoother migration process.
+
+## Building an Existing Subgraph
+
+Building Subgraphs is an essential part of The Graph, described more in depth [here](/subgraphs/quick-start/). To be able to build and deploy the existing Subgraph used in this tutorial, the following repo is provided:
+
+- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial)
+
+> Note: The contract used in the Subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit).
+
+## Subgraph Manifest Definition
+
+The Subgraph manifest `subgraph.yaml` identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest that you will use:
+
+```yaml
+specVersion: 1.3.0
+schema:
+ file: ./schema.graphql
+dataSources:
+ - kind: ethereum
+ name: Lock
+ network: sepolia
+ source:
+ address: '0xb3aabe721794b85fe4e72134795c2f93b4eb7e63'
+ abi: Lock
+ startBlock: 5955690
+ mapping:
+ kind: ethereum/events
+ apiVersion: 0.0.9
+ language: wasm/assemblyscript
+ entities:
+ - Withdrawal
+ abis:
+ - name: Lock
+ file: ./abis/Lock.json
+ eventHandlers:
+ - event: Withdrawal(uint256,uint256)
+ handler: handleWithdrawal
+ file: ./src/lock.ts
+```
+
+- The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract
+- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia`
+- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted.
+
+## Grafting Manifest Definition
+
+Grafting requires adding two new items to the original Subgraph manifest:
+
+```yaml
+---
+features:
+ - grafting # feature name
+graft:
+ base: Qm... # Subgraph ID of base Subgraph
+ block: 5956000 # block number
+```
+
+- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features).
+- `graft:` is a map of the `base` Subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base Subgraph up to and including the given block and then continue indexing the new Subgraph from that block on.
+
+The `base` and `block` values can be found by deploying two Subgraphs: one for the base indexing and one with grafting.
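Conceptually, grafting can be pictured like this — an illustrative sketch only, not Graph Node's actual implementation:

```typescript
// Entities from the base Subgraph up to and including the graft block are
// copied; everything after that block is indexed by the new Subgraph.
interface Entity {
  id: string
  block: number
}

function graft(baseEntities: Entity[], graftBlock: number, newlyIndexed: Entity[]): Entity[] {
  const copied = baseEntities.filter((e) => e.block <= graftBlock)
  const fresh = newlyIndexed.filter((e) => e.block > graftBlock)
  return copied.concat(fresh)
}

const base = [
  { id: 'event1', block: 5955700 },
  { id: 'event2', block: 5955900 },
]
const replacement = [{ id: 'event3', block: 5956200 }]

console.log(graft(base, 5956000, replacement).map((e) => e.id).join(', ')) // event1, event2, event3
```

This is why the graft `block` matters: history before it comes from the base Subgraph, and only blocks after it are indexed with the new mappings.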
+
+## Deploying the Base Subgraph
+
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-example`
+2. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-example` folder from the repo
+3. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground
+
+```graphql
+{
+ withdrawals(first: 5) {
+ id
+ amount
+ when
+ }
+}
+```
+
+It returns something like this:
+
+```
+{
+ "data": {
+ "withdrawals": [
+ {
+ "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+ "amount": "0",
+ "when": "1716394824"
+ },
+ {
+ "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+ "amount": "0",
+ "when": "1716394848"
+ }
+ ]
+ }
+}
+```
+
+Once you have verified the Subgraph is indexing properly, you can quickly update the Subgraph with grafting.
+
+## Deploying the Grafting Subgraph
+
+The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc.
+
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a Subgraph on Sepolia testnet called `graft-replacement`
+2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) by the old contract that you care about and the `base` of the old Subgraph. The `base` Subgraph ID is the `Deployment ID` of your original `graft-example` Subgraph. You can find this in Subgraph Studio.
+3. Follow the directions in the `AUTH & DEPLOY` section on your Subgraph page in the `graft-replacement` folder from the repo
+4. Once finished, verify the Subgraph is indexing properly. If you run the following command in The Graph Playground
+
+```graphql
+{
+ withdrawals(first: 5) {
+ id
+ amount
+ when
+ }
+}
+```
+
+It should return the following:
+
+```
+{
+ "data": {
+ "withdrawals": [
+ {
+ "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
+ "amount": "0",
+ "when": "1716394824"
+ },
+ {
+ "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
+ "amount": "0",
+ "when": "1716394848"
+ },
+ {
+ "id": "0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af06000000",
+ "amount": "0",
+ "when": "1716429732"
+ }
+ ]
+ }
+}
+```
+
+You can see that the `graft-replacement` Subgraph is indexing from older `graft-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` after, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` Subgraph.
+
+Congrats! You have successfully grafted a Subgraph onto another Subgraph.
+
+## Additional Resources
+
+If you want more experience with grafting, here are a few examples for popular contracts:
+
+- [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml)
+- [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml)
+- [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml)
+
+To become even more of a Graph expert, consider learning about other ways to handle changes in underlying data sources. Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results.
+
+> Note: A lot of material from this article was taken from the previously published [Arweave article](/subgraphs/cookbook/arweave/)
diff --git a/website/src/pages/cs/subgraphs/guides/near.mdx b/website/src/pages/cs/subgraphs/guides/near.mdx
new file mode 100644
index 000000000000..275c2aba0fd4
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/guides/near.mdx
@@ -0,0 +1,283 @@
+---
+title: Building Subgraphs on NEAR
+---
+
+This guide is an introduction to building Subgraphs indexing smart contracts on the [NEAR blockchain](https://docs.near.org/).
+
+## What is NEAR?
+
+[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information.
+
+## What are NEAR Subgraphs?
+
+The Graph gives developers tools to process blockchain events and make the resulting data easily available via a GraphQL API, known individually as a Subgraph. [Graph Node](https://github.com/graphprotocol/graph-node) is now able to process NEAR events, which means that NEAR developers can now build Subgraphs to index their smart contracts.
+
+Subgraphs are event-based, which means that they listen for and then process onchain events. There are currently two types of handlers supported for NEAR Subgraphs:
+
+- Block handlers: these run on every new block.
+- Receipt handlers: these run every time a message is executed at a specified account.
+
+[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt):
+
+> A Receipt is the only actionable object in the system. When we talk about "processing a transaction" on the NEAR platform, this eventually means "applying Receipts" at some point.
+
+## Building NEAR Subgraphs
+
+`@graphprotocol/graph-cli` is a command-line tool for building and deploying Subgraphs.
+
+`@graphprotocol/graph-ts` is a library of Subgraph-specific types.
+
+NEAR Subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`.
+
+> Building a NEAR Subgraph is very similar to building a Subgraph that indexes Ethereum.
+
+There are three aspects of Subgraph definition:
+
+**subgraph.yaml:** the Subgraph manifest, defining the data sources of interest, and how they should be processed. NEAR is a new `kind` of data source.
+
+**schema.graphql:** a schema file that defines what data is stored for your Subgraph, and how to query it via GraphQL. The requirements for NEAR Subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph/#the-graphql-schema).
+
+**AssemblyScript Mappings:** [AssemblyScript code](/subgraphs/developing/creating/graph-ts/api/) that translates from the event data to the entities defined in your schema. NEAR support introduces NEAR-specific data types and new JSON parsing functionality.
+
+During Subgraph development there are two key commands:
+
+```bash
+$ graph codegen # generates types from the schema file identified in the manifest
+$ graph build # generates Web Assembly from the AssemblyScript files, and prepares all the Subgraph files in a /build folder
+```
+
+### Subgraph Manifest Definition
+
+The Subgraph manifest (`subgraph.yaml`) identifies the data sources for the Subgraph, the triggers of interest, and the functions that should be run in response to those triggers. See below for an example Subgraph manifest for a NEAR Subgraph:
+
+```yaml
+specVersion: 1.3.0
+schema:
+ file: ./src/schema.graphql # link to the schema file
+dataSources:
+ - kind: near
+ network: near-mainnet
+ source:
+ account: app.good-morning.near # This data source will monitor this account
+ startBlock: 10662188 # Required for NEAR
+ mapping:
+ apiVersion: 0.0.9
+ language: wasm/assemblyscript
+ blockHandlers:
+ - handler: handleNewBlock # the function name in the mapping file
+ receiptHandlers:
+ - handler: handleReceipt # the function name in the mapping file
+ file: ./src/mapping.ts # link to the file with the Assemblyscript mappings
+```
+
+- NEAR Subgraphs introduce a new `kind` of data source (`near`)
+- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet`
+- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account.
+- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least one prefix or suffix must be specified; they will match any account starting or ending with one of the listed values, respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary, the other field can be omitted.
+
+```yaml
+accounts:
+ prefixes:
+ - app
+ - good
+ suffixes:
+ - morning.near
+ - morning.testnet
+```
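The matching behavior described above can be sketched in plain TypeScript. Note that `matchesAccounts` is an illustrative helper, not part of `graph-cli` — Graph Node applies this filtering internally:

```typescript
// Illustrative sketch of the `source.accounts` matching rule: an account
// qualifies if it starts with any listed prefix OR ends with any listed suffix.
interface AccountsFilter {
  prefixes?: string[]
  suffixes?: string[]
}

function matchesAccounts(account: string, filter: AccountsFilter): boolean {
  const byPrefix = (filter.prefixes ?? []).some((p) => account.startsWith(p))
  const bySuffix = (filter.suffixes ?? []).some((s) => account.endsWith(s))
  return byPrefix || bySuffix
}

const filter: AccountsFilter = {
  prefixes: ['app', 'good'],
  suffixes: ['morning.near', 'morning.testnet'],
}

console.log(matchesAccounts('app.nearcrowd.near', filter)) // true — prefix "app"
console.log(matchesAccounts('good-morning.near', filter)) // true — suffix "morning.near"
console.log(matchesAccounts('unrelated.testnet', filter)) // false — no prefix or suffix matches
```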
+
+NEAR data sources support two types of handlers:
+
+- `blockHandlers`: run on every new NEAR block. No `source.account` is required.
+- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources).
+
+### Schema Definition
+
+Schema definition describes the structure of the resulting Subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on Subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
+
+### AssemblyScript Mappings
+
+The handlers for processing events are written in [AssemblyScript](https://www.assemblyscript.org/).
+
+NEAR indexing introduces NEAR-specific data types to the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/).
+
+```typescript
+class ExecutionOutcome {
+ gasBurnt: u64,
+ blockHash: Bytes,
+ id: Bytes,
+ logs: Array<string>,
+ receiptIds: Array<Bytes>,
+ tokensBurnt: BigInt,
+ executorId: string,
+ }
+
+class ActionReceipt {
+ predecessorId: string,
+ receiverId: string,
+ id: CryptoHash,
+ signerId: string,
+ gasPrice: BigInt,
+ outputDataReceivers: Array<DataReceiver>,
+ inputDataIds: Array<CryptoHash>,
+ actions: Array<ActionValue>,
+ }
+
+class BlockHeader {
+ height: u64,
+ prevHeight: u64, // Always zero when version < V3
+ epochId: Bytes,
+ nextEpochId: Bytes,
+ chunksIncluded: u64,
+ hash: Bytes,
+ prevHash: Bytes,
+ timestampNanosec: u64,
+ randomValue: Bytes,
+ gasPrice: BigInt,
+ totalSupply: BigInt,
+ latestProtocolVersion: u32,
+ }
+
+class ChunkHeader {
+ gasUsed: u64,
+ gasLimit: u64,
+ shardId: u64,
+ chunkHash: Bytes,
+ prevBlockHash: Bytes,
+ balanceBurnt: BigInt,
+ }
+
+class Block {
+ author: string,
+ header: BlockHeader,
+ chunks: Array<ChunkHeader>,
+ }
+
+class ReceiptWithOutcome {
+ outcome: ExecutionOutcome,
+ receipt: ActionReceipt,
+ block: Block,
+ }
+```
+
+These types are passed to block & receipt handlers:
+
+- Block handlers will receive a `Block`
+- Receipt handlers will receive a `ReceiptWithOutcome`
+
+Otherwise, the rest of the [AssemblyScript API](/subgraphs/developing/creating/graph-ts/api/) is available to NEAR Subgraph developers during mapping execution.
+
+This includes a new JSON parsing function - logs on NEAR are frequently emitted as stringified JSONs. A new `json.fromString(...)` function is available as part of the [JSON API](/subgraphs/developing/creating/graph-ts/api/#json-api) to allow developers to easily process these logs.
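To illustrate the flow, here is a plain-TypeScript sketch of what a receipt handler does with these types. The slimmed-down interfaces and `JSON.parse` below are stand-ins for illustration only — a real mapping imports the NEAR types from `graph-ts` and uses `json.fromString(...)`:

```typescript
// Stand-in shapes for the graph-ts NEAR types shown above (illustration only).
interface ExecutionOutcomeLike {
  logs: string[]
}
interface ActionReceiptLike {
  signerId: string
  receiverId: string
}
interface ReceiptWithOutcomeLike {
  outcome: ExecutionOutcomeLike
  receipt: ActionReceiptLike
}

// A receipt handler typically inspects the receipt, then parses any
// JSON-formatted logs emitted by the contract.
function handleReceipt(data: ReceiptWithOutcomeLike): string[] {
  const events: string[] = []
  for (const log of data.outcome.logs) {
    const parsed = JSON.parse(log) // graph-ts provides json.fromString(...) for this
    events.push(parsed.event)
  }
  return events
}

const sample: ReceiptWithOutcomeLike = {
  outcome: { logs: ['{"event":"ft_transfer","amount":"100"}'] },
  receipt: { signerId: 'alice.near', receiverId: 'app.good-morning.near' },
}

console.log(handleReceipt(sample)) // [ 'ft_transfer' ]
```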
+
+## Deploying a NEAR Subgraph
+
+Once you have a built Subgraph, it is time to deploy it to Graph Node for indexing. NEAR Subgraphs can be deployed to any Graph Node `>=v0.26.x` (this version has not yet been tagged & released).
+
+Subgraph Studio and the upgrade Indexer on The Graph Network currently support indexing NEAR mainnet and testnet in beta, with the following network names:
+
+- `near-mainnet`
+- `near-testnet`
+
+More information on creating and deploying Subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio/).
+
+As a quick primer - the first step is to "create" your Subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a Subgraph".
+
+Once your Subgraph has been created, you can deploy your Subgraph by using the `graph deploy` CLI command:
+
+```sh
+$ graph create --node <graph-node-url> <subgraph-name> # creates a Subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
+$ graph deploy --node <graph-node-url> --ipfs https://api.thegraph.com/ipfs/ <subgraph-name> # uploads the build files to a specified IPFS endpoint, and then deploys the Subgraph to a specified Graph Node based on the manifest IPFS hash
+```
+
+The node configuration will depend on where the Subgraph is being deployed.
+
+### Subgraph Studio
+
+```sh
+graph auth
+graph deploy
+```
+
+### Local Graph Node (based on default configuration)
+
+```sh
+graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001
+```
+
+Once your Subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the Subgraph itself:
+
+```graphql
+{
+ _meta {
+ block {
+ number
+ }
+ }
+}
+```
+
+### Indexing NEAR with a Local Graph Node
+
+Running a Graph Node that indexes NEAR has the following operational requirements:
+
+- NEAR Indexer Framework with Firehose instrumentation
+- NEAR Firehose Component(s)
+- Graph Node with Firehose endpoint configured
+
+We will provide more information on running the above components soon.
+
+## Querying a NEAR Subgraph
+
+The GraphQL endpoint for NEAR Subgraphs is determined by the schema definition, with the existing API interface. Please visit the [GraphQL API documentation](/subgraphs/querying/graphql-api/) for more information.
+
+## Example Subgraphs
+
+Here are some example Subgraphs for reference:
+
+[NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks)
+
+[NEAR Receipts](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-receipts)
+
+## FAQ
+
+### How does the beta work?
+
+NEAR support is in beta, which means that there may be changes to the API as we continue to work on improving the integration. Please email near@thegraph.com so that we can support you in building NEAR Subgraphs, and keep you up to date on the latest developments!
+
+### Can a Subgraph index both NEAR and EVM chains?
+
+No, a Subgraph can only support data sources from one chain/network.
+
+### Can Subgraphs react to more specific triggers?
+
+Currently, only Block and Receipt triggers are supported. We are investigating triggers for function calls to a specified account. We are also interested in supporting event triggers, once NEAR has native event support.
+
+### Will receipt handlers trigger for accounts and their sub-accounts?
+
+If an `account` is specified, that will only match the exact account name. It is possible to match sub-accounts by specifying an `accounts` field, with `suffixes` and `prefixes` specified to match accounts and sub-accounts. For example, the following would match all `mintbase1.near` sub-accounts:
+
+```yaml
+accounts:
+ suffixes:
+ - mintbase1.near
+```
+
+### Can NEAR Subgraphs make view calls to NEAR accounts during mappings?
+
+This is not supported. We are evaluating whether this functionality is required for indexing.
+
+### Can I use data source templates in my NEAR Subgraph?
+
+This feature is not currently supported. We are evaluating whether this functionality is required for indexing.
+
+### Ethereum Subgraphs support "pending" and "current" versions, how can I deploy a "pending" version of a NEAR Subgraph?
+
+Pending functionality is not yet supported for NEAR Subgraphs. In the interim, you can deploy a new version to a different "named" Subgraph, and then when that is synced with the chain head, you can redeploy to your primary "named" Subgraph, which will use the same underlying deployment ID, so the main Subgraph will be instantly synced.
+
+### My question hasn't been answered, where can I get more help building NEAR Subgraphs?
+
+If it is a general question about Subgraph development, there is a lot more information in the rest of the [Developer documentation](/subgraphs/quick-start/). Otherwise please join [The Graph Protocol Discord](https://discord.gg/graphprotocol) and ask in the #near channel or email near@thegraph.com.
+
+## References
+
+- [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton)
diff --git a/website/src/pages/cs/subgraphs/guides/polymarket.mdx b/website/src/pages/cs/subgraphs/guides/polymarket.mdx
new file mode 100644
index 000000000000..74efe387b0d7
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/guides/polymarket.mdx
@@ -0,0 +1,148 @@
+---
+title: Querying Blockchain Data from Polymarket with Subgraphs on The Graph
+sidebarTitle: Query Polymarket Data
+---
+
+Query Polymarket’s onchain data using GraphQL via Subgraphs on The Graph Network. Subgraphs are decentralized APIs powered by The Graph, a protocol for indexing & querying data from blockchains.
+
+## Polymarket Subgraph on Graph Explorer
+
+You can see an interactive query playground on the [Polymarket Subgraph’s page on The Graph Explorer](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one), where you can test any query.
+
+
+
+## How to use the Visual Query Editor
+
+The visual query editor helps you test sample queries from your Subgraph.
+
+You can use the GraphiQL Explorer to compose your GraphQL queries by clicking on the fields you want.
+
+### Example Query: Get the top 5 highest payouts from Polymarket
+
+```
+{
+ redemptions(orderBy: payout, orderDirection: desc, first: 5) {
+ payout
+ redeemer
+ id
+ timestamp
+ }
+}
+```
+
+### Example output
+
+```
+{
+ "data": {
+ "redemptions": [
+ {
+ "id": "0x8fbb68b7c0cbe9aca6024d063a843a23d046b5522270fd25c6a81c511cf517d1_0x3b",
+ "payout": "6274509531681",
+ "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c",
+ "timestamp": "1722929672"
+ },
+ {
+ "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0x7",
+ "payout": "2246253575996",
+ "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c",
+ "timestamp": "1726701528"
+ },
+ {
+ "id": "0x983b71c64b5075fc1179f4e03849af9c727be60de71c9e86e37ad0b3e43f9db9_0x26",
+ "payout": "2135448291991",
+ "redeemer": "0x5a181dcf3eb53a09fb32b20a5a9312fb8d26f689",
+ "timestamp": "1704932625"
+ },
+ {
+ "id": "0x2b2826448fcacde7931828cfcd3cc4aaeac8080fdff1e91363f0589c9b503eca_0xa",
+ "payout": "1917395333835",
+ "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c",
+ "timestamp": "1726701528"
+ },
+ {
+ "id": "0xfe82e117201f5169abc822281ccf0469e6b3740fcb4e799d1b599f83b8f11656_0x30",
+ "payout": "1862505580000",
+ "redeemer": "0xfffe4013adfe325c6e02d36dc66e091f5476f52c",
+ "timestamp": "1722929866"
+ }
+ ]
+ }
+}
+```
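The `payout` values above are raw integer strings. Assuming they are denominated in 6-decimal base units (a USDC-style convention — verify this against the Subgraph's schema before relying on it), a display conversion might look like this:

```typescript
// Hypothetical formatter: treats a base-unit integer string as a fixed-point
// number with `decimals` fractional digits. The 6-decimal assumption is ours,
// not something the Subgraph schema guarantees.
function formatPayout(raw: string, decimals: number = 6): string {
  const padded = raw.padStart(decimals + 1, '0') // ensure at least one whole digit
  const whole = padded.slice(0, padded.length - decimals)
  const frac = padded.slice(padded.length - decimals)
  return `${whole}.${frac}`
}

console.log(formatPayout('6274509531681')) // 6274509.531681
```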
+
+## Polymarket's GraphQL Schema
+
+The schema for this Subgraph is defined [here in Polymarket’s GitHub](https://github.com/Polymarket/polymarket-subgraph/blob/main/polymarket-subgraph/schema.graphql).
+
+### Polymarket Subgraph Endpoint
+
+https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp
+
+The Polymarket Subgraph endpoint is available on [Graph Explorer](https://thegraph.com/explorer).
+
+
+
+## How to Get your own API Key
+
+1. Go to [https://thegraph.com/studio](https://thegraph.com/studio) and connect your wallet
+2. Go to https://thegraph.com/studio/apikeys/ to create an API key
+
+You can use this API key on any Subgraph in [Graph Explorer](https://thegraph.com/explorer), and it’s not limited to just Polymarket.
+
+100k queries per month are free, which is perfect for your side project!
+
+## Additional Polymarket Subgraphs
+
+- [Polymarket](https://thegraph.com/explorer/subgraphs/81Dm16JjuFSrqz813HysXoUPvzTwE7fsfPk2RTf66nyC?view=Query&chain=arbitrum-one)
+- [Polymarket Activity Polygon](https://thegraph.com/explorer/subgraphs/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp?view=Query&chain=arbitrum-one)
+- [Polymarket Profit & Loss](https://thegraph.com/explorer/subgraphs/6c58N5U4MtQE2Y8njfVrrAfRykzfqajMGeTMEvMmskVz?view=Query&chain=arbitrum-one)
+- [Polymarket Open Interest](https://thegraph.com/explorer/subgraphs/ELaW6RtkbmYNmMMU6hEPsghG9Ko3EXSmiRkH855M4qfF?view=Query&chain=arbitrum-one)
+
+## How to Query with the API
+
+You can pass any GraphQL query to the Polymarket endpoint and receive data in json format.
+
+The following code example shows how to query this endpoint from a Node.js script.
+
+### Sample Code from node.js
+
+```javascript
+const axios = require('axios');
+
+const graphqlQuery = `{
+  positions(first: 5) {
+    condition
+    outcomeIndex
+  }
+}`;
+
+const queryUrl = 'https://gateway.thegraph.com/api/{api-key}/subgraphs/id/Bx1W4S7kDVxs9gC3s2G6DS8kdNBJNVhMviCtin2DiBp'
+
+const graphQLRequest = {
+ method: 'post',
+ url: queryUrl,
+ data: {
+ query: graphqlQuery,
+ },
+};
+
+// Send the GraphQL query
+axios(graphQLRequest)
+ .then((response) => {
+ // Handle the response here
+ const data = response.data.data
+ console.log(data)
+
+ })
+ .catch((error) => {
+ // Handle any errors
+ console.error(error);
+ });
+```
+
+### Additional resources
+
+For more information about querying data from your Subgraph, read more [here](/subgraphs/querying/introduction/).
+
+To explore all the ways you can optimize & customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/cs/subgraphs/guides/secure-api-keys-nextjs.mdx b/website/src/pages/cs/subgraphs/guides/secure-api-keys-nextjs.mdx
new file mode 100644
index 000000000000..d311cfa5117e
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/guides/secure-api-keys-nextjs.mdx
@@ -0,0 +1,123 @@
+---
+title: How to Secure API Keys Using Next.js Server Components
+---
+
+## Overview
+
+We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain Subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key).
+
+In this cookbook, we will go over how to create a Next.js server component that queries a Subgraph while also hiding the API key from the frontend.
+
+### Caveats
+
+- Next.js server components do not protect API keys from being drained using denial of service attacks.
+- The Graph Network gateways have denial of service detection and mitigation strategies in place; however, using server components may weaken these protections.
+- Next.js server components introduce centralization risks as the server can go down.
+
+### Why It Matters
+
+In a standard React application, API keys included in the frontend code can be exposed to the client side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side.
+
+### Using client-side rendering to query a Subgraph
+
+
+
+### Prerequisites
+
+- An API key from [Subgraph Studio](https://thegraph.com/studio)
+- Basic knowledge of Next.js and React.
+- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app).
+
+## Step-by-Step Cookbook
+
+### Step 1: Set Up Environment Variables
+
+1. In our Next.js project root, create a `.env.local` file.
+2. Add our API key: `API_KEY=<api-key-here>`.
+
+### Step 2: Create a Server Component
+
+1. In our `components` directory, create a new file, `ServerComponent.js`.
+2. Use the provided example code to set up the server component.
+
+### Step 3: Implement a Server-Side API Request
+
+In `ServerComponent.js`, add the following code:
+
+```javascript
+const API_KEY = process.env.API_KEY
+
+export default async function ServerComponent() {
+ const response = await fetch(
+ `https://gateway-arbitrum.network.thegraph.com/api/${API_KEY}/subgraphs/id/HUZDsRpEVP2AvzDCyzDHtdc64dyDxx8FQjzsmqSg4H3B`,
+ {
+ method: 'POST',
+ headers: {
+ 'Content-Type': 'application/json',
+ },
+ body: JSON.stringify({
+ query: /* GraphQL */ `
+ {
+ factories(first: 5) {
+ id
+ poolCount
+ txCount
+ totalVolumeUSD
+ }
+ }
+ `,
+ }),
+ },
+ )
+
+ const responseData = await response.json()
+ const data = responseData.data
+
+  return (
+    <div>
+      <h3>Server Component</h3>
+      {data ? (
+        <ul>
+          {data.factories.map((factory) => (
+            <li key={factory.id}>
+              <p>ID: {factory.id}</p>
+              <p>Pool Count: {factory.poolCount}</p>
+              <p>Transaction Count: {factory.txCount}</p>
+              <p>Total Volume USD: {factory.totalVolumeUSD}</p>
+            </li>
+          ))}
+        </ul>
+      ) : (
+        <p>Loading data...</p>
+      )}
+    </div>
+  )
+}
+```
+
+### Step 4: Use the Server Component
+
+1. In our page file (e.g., `pages/index.js`), import `ServerComponent`.
+2. Render the component:
+
+```javascript
+import ServerComponent from './components/ServerComponent'
+
+export default function Home() {
+  return (
+    <main>
+      <ServerComponent />
+    </main>
+  )
+}
+```
+
+### Step 5: Run and Test Our Dapp
+
+Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key.
+
+
+
+### Conclusion
+
+By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/subgraphs/querying/managing-api-keys/) to increase your API key security even further.
diff --git a/website/src/pages/cs/subgraphs/guides/subgraph-composition.mdx b/website/src/pages/cs/subgraphs/guides/subgraph-composition.mdx
new file mode 100644
index 000000000000..f5480ab15a48
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/guides/subgraph-composition.mdx
@@ -0,0 +1,132 @@
+---
+title: Aggregate Data Using Subgraph Composition
+sidebarTitle: Build a Composable Subgraph with Multiple Subgraphs
+---
+
+Leverage Subgraph composition to speed up development time. Create a base Subgraph with essential data, then build additional Subgraphs on top of it.
+
+Optimize your Subgraph by merging data from independent, source Subgraphs into a single composable Subgraph to enhance data aggregation.
+
+## Introduction
+
+Composable Subgraphs enable you to combine multiple Subgraphs' data sources into a new Subgraph, facilitating faster and more flexible Subgraph development. Subgraph composition empowers you to create and maintain smaller, focused Subgraphs that collectively form a larger, interconnected dataset.
+
+### Benefits of Composition
+
+Subgraph composition is a powerful feature for scaling, allowing you to:
+
+- Reuse, mix, and combine existing data
+- Streamline development and queries
+- Use multiple data sources (up to five source Subgraphs)
+- Speed up your Subgraph's syncing speed
+- Handle errors and optimize the resync
+
+## Architecture Overview
+
+The setup for this example involves two Subgraphs:
+
+1. **Source Subgraph**: Tracks event data as entities.
+2. **Dependent Subgraph**: Uses the source Subgraph as a data source.
+
+You can find these in the `source` and `dependent` directories.
+
+- The **source Subgraph** is a basic event-tracking Subgraph that records events emitted by relevant contracts.
+- The **dependent Subgraph** references the source Subgraph as a data source, using the entities from the source as triggers.
+
+While the source Subgraph is a standard Subgraph, the dependent Subgraph uses the Subgraph composition feature.
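As a rough sketch, a dependent Subgraph declares a source Subgraph as a data source of `kind: subgraph` and reacts to its entities. The names, deployment ID, and handler below are placeholders — see the example repository linked later in this guide for a working manifest:

```yaml
dataSources:
  - kind: subgraph # instead of ethereum/contract
    name: SourceSubgraph
    network: mainnet
    source:
      address: 'QmSourceSubgraphDeploymentId' # deployment ID of the source Subgraph
      startBlock: 0
    mapping:
      apiVersion: 0.0.9
      language: wasm/assemblyscript
      file: ./src/mapping.ts
      handlers:
        - handler: handleBlockData
          entity: Block # entity from the source Subgraph that triggers the handler
```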
+
+## Prerequisites
+
+### Source Subgraphs
+
+- All Subgraphs need to be published with a **specVersion 1.3.0 or later** (Use the latest graph-cli version to be able to deploy composable Subgraphs)
+- See the notes in the [graph-node v0.37.0 release](https://github.com/graphprotocol/graph-node/releases/tag/v0.37.0)
+- Immutable entities only: All Subgraphs must have [immutable entities](https://thegraph.com/docs/en/subgraphs/best-practices/immutable-entities-bytes-as-ids/#immutable-entities) when the Subgraph is deployed
+- Pruning can be used in the source Subgraphs, but only entities that are immutable can be composed on top of
+- Source Subgraphs cannot use grafting on top of existing entities
+- Aggregated entities can be used in composition, but entities that are composed from them cannot perform additional aggregations directly
+
+### Composed Subgraphs
+
+- You can only compose up to a **maximum of 5 source Subgraphs**
+- Composed Subgraphs can only use **datasources from the same chain**
+- **Nested composition is not yet supported**: Composing on top of another composed Subgraph isn’t allowed at this time
+- Aggregated entities can be used in composition, but entities composed on top of them cannot themselves use aggregations directly
+- Developers cannot compose an onchain datasource with a Subgraph datasource (i.e. you can’t do normal event handlers and call handlers and block handlers in a composed Subgraph)
+
+Additionally, you can explore the [example-composable-subgraph](https://github.com/graphprotocol/example-composable-subgraph) repository for a working implementation of composable Subgraphs
+
+## Get Started
+
+The following guide provides examples for defining 3 source Subgraphs to create one powerful composed Subgraph.
+
+### Specifics
+
+- To keep this example simple, all source Subgraphs use only block handlers. However, in a real environment, each source Subgraph will use data from different smart contracts.
+- The examples below show how to import and extend the schema of another Subgraph to enhance its functionality.
+- Each source Subgraph is optimized with a specific entity.
+- All the commands listed install the necessary dependencies, generate code based on the GraphQL schema, build the Subgraph, and deploy it to your local Graph Node instance.
+
+### Step 1. Deploy Block Time Source Subgraph
+
+This first source Subgraph calculates the block time for each block.
+
+- It imports schemas from other Subgraphs and adds a `block` entity with a `timestamp` field, representing the time each block was mined.
+- It listens to time-related blockchain events (e.g., block timestamps) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the following commands:
+
+```bash
+npm install
+npm run codegen
+npm run build
+npm run create-local
+npm run deploy-local
+```
+
+### Step 2. Deploy Block Cost Source Subgraph
+
+This second source Subgraph indexes the cost of each block.
+
+#### Key Functions
+
+- It imports schemas from other Subgraphs and adds a `block` entity with cost-related fields.
+- It listens to blockchain events related to costs (e.g. gas fees, transaction costs) and processes this data to update the Subgraph's entities accordingly.
+
+To deploy this Subgraph locally, run the same commands as above.
+
+### Step 3. Define Block Size in Source Subgraph
+
+This third source Subgraph indexes the size of each block. To deploy this Subgraph locally, run the same commands as above.
+
+#### Key Functions
+
+- It imports existing schemas from other Subgraphs and adds a `block` entity with a `size` field representing each block's size.
+- It listens to blockchain events related to block sizes (e.g., storage or volume) and processes this data to update the Subgraph's entities accordingly.
+
+### Step 4. Combine Into Block Stats Subgraph
+
+This composed Subgraph combines and aggregates the information from the source Subgraphs above, providing a unified view of block statistics. To deploy this Subgraph locally, run the same commands as above.
+
+> Note:
+>
+> - Any change to a source Subgraph will likely generate a new deployment ID.
+> - Be sure to update the deployment ID in the data source address of the Subgraph manifest to take advantage of the latest changes.
+> - All source Subgraphs should be deployed before the composed Subgraph is deployed.
+
+#### Key Functions
+
+- It provides a consolidated data model that encompasses all relevant block metrics.
+- It combines data from 3 source Subgraphs, and provides a comprehensive view of block statistics, enabling more complex queries and analyses.
+
+## Key Takeaways
+
+- This powerful tool will scale your Subgraph development and allow you to combine multiple Subgraphs.
+- The setup includes the deployment of 3 source Subgraphs and one final deployment of the composed Subgraph.
+- This feature unlocks scalability, simplifying both development and maintenance efficiency.
+
+## Additional Resources
+
+- Check out all the code for this example in [this GitHub repo](https://github.com/graphprotocol/example-composable-subgraph).
+- To add advanced features to your Subgraph, check out [Subgraph advanced features](/developing/creating/advanced/).
+- To learn more about aggregations, check out [Timeseries and Aggregations](/subgraphs/developing/creating/advanced/#timeseries-and-aggregations).
diff --git a/website/src/pages/cs/subgraphs/guides/subgraph-debug-forking.mdx b/website/src/pages/cs/subgraphs/guides/subgraph-debug-forking.mdx
new file mode 100644
index 000000000000..60ad21d2fe95
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/guides/subgraph-debug-forking.mdx
@@ -0,0 +1,101 @@
+---
+title: Quick and Easy Subgraph Debugging Using Forks
+---
+
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync-up your Subgraph with the target blockchain. The discrepancy between quick changes with the purpose of debugging and long wait times needed for indexing is extremely counterproductive and we are well aware of that. This is why we are introducing **Subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed-up Subgraph debugging!
+
+## Ok, what is it?
+
+**Subgraph forking** is the process of lazily fetching entities from _another_ Subgraph's store (usually a remote one).
+
+In the context of debugging, **Subgraph forking** allows you to debug your failed Subgraph at block _X_ without needing to wait to sync-up to block _X_.
+
+## What?! How?
+
+When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
+
+In a nutshell, we are going to _fork the failing Subgraph_ from a remote Graph Node that is guaranteed to have the Subgraph indexed up to block _X_ in order to provide the locally deployed Subgraph being debugged at block _X_ an up-to-date view of the indexing state.
+
+## Please, show me some code!
+
+To stay focused on Subgraph debugging, let's keep things simple and run along with the [example-Subgraph](https://github.com/graphprotocol/graph-tooling/tree/main/examples/ethereum-gravatar) indexing the Ethereum Gravity smart contract.
+
+Here are the handlers defined for indexing `Gravatar`s, with no bugs whatsoever:
+
+```tsx
+export function handleNewGravatar(event: NewGravatar): void {
+ let gravatar = new Gravatar(event.params.id.toHex().toString())
+ gravatar.owner = event.params.owner
+ gravatar.displayName = event.params.displayName
+ gravatar.imageUrl = event.params.imageUrl
+ gravatar.save()
+}
+
+export function handleUpdatedGravatar(event: UpdatedGravatar): void {
+ let gravatar = Gravatar.load(event.params.id.toI32().toString())
+ if (gravatar == null) {
+ log.critical('Gravatar not found!', [])
+ return
+ }
+ gravatar.owner = event.params.owner
+ gravatar.displayName = event.params.displayName
+ gravatar.imageUrl = event.params.imageUrl
+ gravatar.save()
+}
+```
+
+Oops, how unfortunate: when I deploy my perfect-looking Subgraph to [Subgraph Studio](https://thegraph.com/studio/), it fails with the _"Gravatar not found!"_ error.
+
+The usual way to attempt a fix is:
+
+1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't).
+2. Re-deploy the Subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
+3. Wait for it to sync-up.
+4. If it breaks again, go back to 1, otherwise: Hooray!
+
+It is indeed pretty similar to an ordinary debug process, but there is one step that horribly slows down the process: _3. Wait for it to sync-up._
+
+Using **Subgraph forking** we can essentially eliminate this step. Here is how it looks:
+
+0. Spin-up a local Graph Node with the **_appropriate fork-base_** set.
+1. Make a change in the mappings source, which you believe will solve the issue.
+2. Deploy to the local Graph Node, **_forking the failing Subgraph_** and **_starting from the problematic block_**.
+3. If it breaks again, go back to 1, otherwise: Hooray!
+
+You may now have two questions:
+
+1. fork-base what???
+2. Forking who?!
+
+And I answer:
+
+1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended, the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the Subgraph's store.
+2. Forking is easy, no need to sweat:
+
+```bash
+$ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020
+```
+
+Also, don't forget to set the `dataSources.source.startBlock` field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
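A hedged sketch of the relevant `subgraph.yaml` fragment (the contract address shown is illustrative; the block number is the problematic one from the walkthrough below):

```yaml
# Illustrative manifest fragment; only startBlock matters for forking.
dataSources:
  - kind: ethereum/contract
    name: Gravity
    network: mainnet
    source:
      address: '0x2E645469f354BB4F5c8a05B3b30A929361cf77eC'
      abi: Gravity
      startBlock: 6190343 # the problematic block; all prior state comes from the fork
```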
+
+So here is what I do:
+
+1. I spin-up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a Subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
+
+```
+$ cargo run -p graph-node --release -- \
+ --postgres-url postgresql://USERNAME[:PASSWORD]@localhost:5432/graph-node \
+ --ethereum-rpc NETWORK_NAME:[CAPABILITIES]:URL \
+ --ipfs 127.0.0.1:5001 \
+ --fork-base https://api.thegraph.com/subgraphs/id/
+```
+
+2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`) which causes the `handleUpdatedGravatar` to panic with "Gravatar not found!". I make them both convert the `id` to a hex.
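The failure mode can be reproduced in miniature outside of graph-ts (a hypothetical sketch using plain strings and a `Map`, not the real `ByteArray` API): an entity saved under one encoding of the id can never be loaded under another.

```typescript
// Hypothetical sketch of the bug: one on-chain id, keyed two different ways.
const store = new Map<string, string>();

const id = 42; // same value carried by both events

// handleNewGravatar saves under a hex-style key
store.set("0x" + id.toString(16), "gravatar-entity");

// handleUpdatedGravatar tries to load under a decimal-style key
const loaded = store.get(id.toString(10));
console.log(loaded); // undefined, i.e. "Gravatar not found!"
```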
+3. After I made the changes I deploy my Subgraph to the local Graph Node, **_forking the failing Subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`:
+
+```bash
+$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020
+```
+
+4. I check the logs produced by the local Graph Node and, Hooray!, everything seems to be working.
+5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho)
diff --git a/website/src/pages/cs/subgraphs/guides/subgraph-uncrashable.mdx b/website/src/pages/cs/subgraphs/guides/subgraph-uncrashable.mdx
new file mode 100644
index 000000000000..bdc3671399e1
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/guides/subgraph-uncrashable.mdx
@@ -0,0 +1,29 @@
+---
+title: Safe Subgraph Code Generator
+---
+
+[Subgraph Uncrashable](https://float-capital.github.io/float-subgraph-uncrashable/) is a code generation tool that generates a set of helper functions from the graphql schema of a project. It ensures that all interactions with entities in your Subgraph are completely safe and consistent.
+
+## Why integrate with Subgraph Uncrashable?
+
+- **Continuous Uptime**. Mishandled entities may cause Subgraphs to crash, which can be disruptive for projects that are dependent on The Graph. Set up helper functions to make your Subgraphs “uncrashable” and ensure business continuity.
+
+- **Completely Safe**. Common problems seen in Subgraph development are issues of loading undefined entities, not setting or initializing all values of entities, and race conditions on loading and saving entities. Ensure all interactions with entities are completely atomic.
+
+- **User Configurable**. Set default values and configure the level of security checks that suits your individual project's needs. Warning logs are recorded indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
+
+**Key Features**
+
+- The code generation tool accommodates **all** Subgraph types and is configurable for users to set sane defaults on values. The code generation will use this config to generate helper functions that are to the users specification.
+
+- The framework also includes a way (via the config file) to create custom, but safe, setter functions for groups of entity variables. This way it is impossible for the user to load/use a stale graph entity, and it is also impossible to forget to save or set a variable that is required by the function.
+
+- Warning logs are recorded as logs indicating where there is a breach of Subgraph logic to help patch the issue to ensure data accuracy.
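As an illustration only (the names here are invented, not Subgraph Uncrashable's real generated API), a "safe load" helper in this spirit returns a default-initialized entity and logs a warning instead of crashing on a missing id:

```typescript
// Invented sketch of a generated "safe load" helper: never null, never a crash.
type Gravatar = { id: string; owner: string; displayName: string };

const entities = new Map<string, Gravatar>();

function safeLoadGravatar(id: string): Gravatar {
  let entity = entities.get(id);
  if (entity === undefined) {
    // Configured sane defaults are used instead of returning null
    entity = { id, owner: "0x0", displayName: "" };
    entities.set(id, entity);
    console.warn(`Gravatar ${id} missing; initialized with defaults`);
  }
  return entity;
}

const g = safeLoadGravatar("0x2a");
console.log(g.displayName === ""); // true
```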
+
+Subgraph Uncrashable can be run as an optional flag using the Graph CLI codegen command.
+
+```sh
+graph codegen -u [options] []
+```
+
+Visit the [Subgraph uncrashable documentation](https://float-capital.github.io/float-subgraph-uncrashable/docs/) or watch this [video tutorial](https://float-capital.github.io/float-subgraph-uncrashable/docs/tutorial) to learn more and to get started with developing safer Subgraphs.
diff --git a/website/src/pages/cs/subgraphs/guides/transfer-to-the-graph.mdx b/website/src/pages/cs/subgraphs/guides/transfer-to-the-graph.mdx
new file mode 100644
index 000000000000..510b0ea317f6
--- /dev/null
+++ b/website/src/pages/cs/subgraphs/guides/transfer-to-the-graph.mdx
@@ -0,0 +1,104 @@
+---
+title: Transfer to The Graph
+---
+
+Quickly upgrade your Subgraphs from any platform to [The Graph's decentralized network](https://thegraph.com/networks/).
+
+## Benefits of Switching to The Graph
+
+- Use the same Subgraph that your apps already use with zero-downtime migration.
+- Increase reliability from a global network supported by 100+ Indexers.
+- Receive lightning-fast support for Subgraphs 24/7, with an on-call engineering team.
+
+## Upgrade Your Subgraph to The Graph in 3 Easy Steps
+
+1. [Set Up Your Studio Environment](/subgraphs/cookbook/transfer-to-the-graph/#1-set-up-your-studio-environment)
+2. [Deploy Your Subgraph to Studio](/subgraphs/cookbook/transfer-to-the-graph/#2-deploy-your-subgraph-to-studio)
+3. [Publish to The Graph Network](/subgraphs/cookbook/transfer-to-the-graph/#publish-your-subgraph-to-the-graphs-decentralized-network)
+
+## 1. Set Up Your Studio Environment
+
+### Create a Subgraph in Subgraph Studio
+
+- Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet.
+- Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name".
+
+> Note: After publishing, the Subgraph name will be editable but requires onchain action each time, so name it properly.
+
+### Install the Graph CLI
+
+You must have [Node.js](https://nodejs.org/) and a package manager of your choice (`npm` or `pnpm`) installed to use the Graph CLI. Check for the [most recent](https://github.com/graphprotocol/graph-tooling/releases?q=%40graphprotocol%2Fgraph-cli&expanded=true) CLI version.
+
+On your local machine, run the following command:
+
+Using [npm](https://www.npmjs.com/):
+
+```sh
+npm install -g @graphprotocol/graph-cli@latest
+```
+
+Use the following command to create a Subgraph in Studio using the CLI:
+
+```sh
+graph init --product subgraph-studio
+```
+
+### Authenticate Your Subgraph
+
+In The Graph CLI, use the auth command seen in Subgraph Studio:
+
+```sh
+graph auth
+```
+
+## 2. Deploy Your Subgraph to Studio
+
+If you have your source code, you can easily deploy it to Studio. If you don't have it, here's a quick way to deploy your Subgraph.
+
+In The Graph CLI, run the following command:
+
+```sh
+graph deploy --ipfs-hash
+```
+
+> **Note:** Every Subgraph has an IPFS hash (Deployment ID), which looks like this: "Qmasdfad...". To deploy simply use this **IPFS hash**. You’ll be prompted to enter a version (e.g., v0.0.1).
+
+## 3. Publish Your Subgraph to The Graph Network
+
+
+
+### Query Your Subgraph
+
+> To attract about 3 indexers to query your Subgraph, it’s recommended to curate at least 3,000 GRT. To learn more about curating, check out [Curating](/resources/roles/curating/) on The Graph.
+
+You can start [querying](/subgraphs/querying/introduction/) any Subgraph by sending a GraphQL query into the Subgraph’s query URL endpoint, which is located at the top of its Explorer page in Subgraph Studio.
+
+#### Example
+
+[CryptoPunks Ethereum Subgraph](https://thegraph.com/explorer/subgraphs/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK) by Messari:
+
+
+
+The query URL for this Subgraph is:
+
+```sh
+https://gateway-arbitrum.network.thegraph.com/api/your-own-api-key/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK
+```
+
+Now, you simply need to fill in **your own API Key** to start sending GraphQL queries to this endpoint.
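As a minimal sketch (the entity name `punks` and its field selection are assumptions about this Subgraph's schema, and `your-own-api-key` is a placeholder), a query is just a JSON-encoded POST body sent to that URL:

```typescript
// Build a GraphQL request body for the gateway endpoint shown above.
// "your-own-api-key" is a placeholder; the selection set is illustrative.
const endpoint =
  "https://gateway-arbitrum.network.thegraph.com/api/your-own-api-key/subgraphs/id/HdVdERFUe8h61vm2fDyycHgxjsde5PbB832NHgJfZNqK";

const query = "{ punks(first: 5) { id } }";
const body = JSON.stringify({ query });
console.log(body); // {"query":"{ punks(first: 5) { id } }"}

// To actually send it (needs a network connection and a valid key):
// await fetch(endpoint, { method: "POST", headers: { "Content-Type": "application/json" }, body });
```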
+
+### Getting your own API Key
+
+You can create API Keys in Subgraph Studio under the “API Keys” menu at the top of the page:
+
+
+
+### Monitor Subgraph Status
+
+Once you upgrade, you can access and manage your Subgraphs in [Subgraph Studio](https://thegraph.com/studio/) and explore all Subgraphs in [The Graph Explorer](https://thegraph.com/networks/).
+
+### Additional Resources
+
+- To quickly create and publish a new Subgraph, check out the [Quick Start](/subgraphs/quick-start/).
+- To explore all the ways you can optimize and customize your Subgraph for a better performance, read more about [creating a Subgraph here](/developing/creating-a-subgraph/).
diff --git a/website/src/pages/cs/subgraphs/querying/best-practices.mdx b/website/src/pages/cs/subgraphs/querying/best-practices.mdx
index a28d505b9b46..038319488eda 100644
--- a/website/src/pages/cs/subgraphs/querying/best-practices.mdx
+++ b/website/src/pages/cs/subgraphs/querying/best-practices.mdx
@@ -4,7 +4,7 @@ title: Osvědčené postupy dotazování
The Graph provides a decentralized way to query data from blockchains. Its data is exposed through a GraphQL API, making it easier to query with the GraphQL language.
-Learn the essential GraphQL language rules and best practices to optimize your subgraph.
+Learn the essential GraphQL language rules and best practices to optimize your Subgraph.
---
@@ -71,7 +71,7 @@ It means that you can query a GraphQL API using standard `fetch` (natively or vi
However, as mentioned in ["Querying from an Application"](/subgraphs/querying/from-an-application/), it's recommended to use `graph-client`, which supports the following unique features:
-- Manipulace s podgrafy napříč řetězci: Dotazování z více podgrafů v jednom dotazu
+- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query
- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- Fully typed result
@@ -219,7 +219,7 @@ If the application only needs 10 transactions, the query should explicitly set `
### Use a single query to request multiple records
-By default, subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
+By default, Subgraphs have a singular entity for one record. For multiple records, use the plural entities and filter: `where: {id_in:[X,Y,Z]}` or `where: {volume_gt:100000}`
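The idea in client code (a sketch; `gravatars` is a stand-in entity name): collect the ids and build one plural query rather than N singular ones.

```typescript
// Batch many record lookups into a single plural query.
const ids = ["0x2a", "0x2b", "0x2c"];
const idList = ids.map((id) => `"${id}"`).join(", ");
const query = `{ gravatars(where: { id_in: [${idList}] }) { id displayName } }`;
console.log(query);
// { gravatars(where: { id_in: ["0x2a", "0x2b", "0x2c"] }) { id displayName } }
```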
Example of inefficient querying:
diff --git a/website/src/pages/cs/subgraphs/querying/from-an-application.mdx b/website/src/pages/cs/subgraphs/querying/from-an-application.mdx
index b5e719983167..ef667e6b74c2 100644
--- a/website/src/pages/cs/subgraphs/querying/from-an-application.mdx
+++ b/website/src/pages/cs/subgraphs/querying/from-an-application.mdx
@@ -1,5 +1,6 @@
---
title: Dotazování z aplikace
+sidebarTitle: Querying from an App
---
Learn how to query The Graph from your application.
@@ -10,7 +11,7 @@ During the development process, you will receive a GraphQL API endpoint at two d
### Subgraph Studio Endpoint
-After deploying your subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:
+After deploying your Subgraph to [Subgraph Studio](https://thegraph.com/docs/en/subgraphs/developing/deploying/using-subgraph-studio/), you will receive an endpoint that looks like this:
```
https://api.studio.thegraph.com/query///
@@ -20,13 +21,13 @@ https://api.studio.thegraph.com/query///
### The Graph Network Endpoint
-After publishing your subgraph to the network, you will receive an endpoint that looks like this: :
+After publishing your Subgraph to the network, you will receive an endpoint that looks like this:
```
https://gateway.thegraph.com/api//subgraphs/id/
```
-> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the subgraph and populate your application with indexed data.
+> This endpoint is intended for active use on the network. It allows you to use various GraphQL client libraries to query the Subgraph and populate your application with indexed data.
## Using Popular GraphQL Clients
@@ -34,7 +35,7 @@ https://gateway.thegraph.com/api//subgraphs/id/
The Graph is providing its own GraphQL client, `graph-client` that supports unique features such as:
-- Manipulace s podgrafy napříč řetězci: Dotazování z více podgrafů v jednom dotazu
+- Cross-chain Subgraph Handling: Querying from multiple Subgraphs in a single query
- [Automatic Block Tracking](https://github.com/graphprotocol/graph-client/blob/main/packages/block-tracking/README.md)
- [Automatic Pagination](https://github.com/graphprotocol/graph-client/blob/main/packages/auto-pagination/README.md)
- Fully typed result
@@ -43,7 +44,7 @@ The Graph is providing its own GraphQL client, `graph-client` that supports uniq
### Fetch Data with Graph Client
-Let's look at how to fetch data from a subgraph with `graph-client`:
+Let's look at how to fetch data from a Subgraph with `graph-client`:
#### Step 1
@@ -168,7 +169,7 @@ Although it's the heaviest client, it has many features to build advanced UI on
### Fetch Data with Apollo Client
-Let's look at how to fetch data from a subgraph with Apollo client:
+Let's look at how to fetch data from a Subgraph with Apollo client:
#### Step 1
@@ -257,7 +258,7 @@ client
### Fetch data with URQL
-Let's look at how to fetch data from a subgraph with URQL:
+Let's look at how to fetch data from a Subgraph with URQL:
#### Step 1
diff --git a/website/src/pages/cs/subgraphs/querying/graph-client/README.md b/website/src/pages/cs/subgraphs/querying/graph-client/README.md
index 416cadc13c6f..5dc2cfc408de 100644
--- a/website/src/pages/cs/subgraphs/querying/graph-client/README.md
+++ b/website/src/pages/cs/subgraphs/querying/graph-client/README.md
@@ -16,23 +16,23 @@ This library is intended to simplify the network aspect of data consumption for
| Status | Feature | Notes |
| :----: | ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------- |
-| ✅ | Multiple indexers | based on fetch strategies |
-| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue |
-| ✅ | Build time validations & optimizations | |
-| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) |
-| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source |
-| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client |
-| ✅ | Local (client-side) Mutations | |
-| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) |
-| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit |
-| ✅ | Integration with `@apollo/client` | |
-| ✅ | Integration with `urql` | |
-| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` |
-| ✅ | [`@live` queries](./live.md) | Based on polling |
+| ✅ | Multiple indexers | based on fetch strategies |
+| ✅ | Fetch Strategies | timeout, retry, fallback, race, highestValue |
+| ✅ | Build time validations & optimizations | |
+| ✅ | Client-Side Composition | with improved execution planner (based on GraphQL-Mesh) |
+| ✅ | Cross-chain Subgraph Handling | Use similar subgraphs as a single source |
+| ✅ | Raw Execution (standalone mode) | without a wrapping GraphQL client |
+| ✅ | Local (client-side) Mutations | |
+| ✅ | [Automatic Block Tracking](../packages/block-tracking/README.md) | tracking block numbers [as described here](https://thegraph.com/docs/en/developer/distributed-systems/#polling-for-updated-data) |
+| ✅ | [Automatic Pagination](../packages/auto-pagination/README.md) | doing multiple requests in a single call to fetch more than the indexer limit |
+| ✅ | Integration with `@apollo/client` | |
+| ✅ | Integration with `urql` | |
+| ✅ | TypeScript support | with built-in GraphQL Codegen and `TypedDocumentNode` |
+| ✅ | [`@live` queries](./live.md) | Based on polling |
> You can find an [extended architecture design here](./architecture.md)
-## Getting Started
+## Getting Started
You can follow [Episode 45 of `graphql.wtf`](https://graphql.wtf/episodes/45-the-graph-client) to learn more about Graph Client:
@@ -138,7 +138,7 @@ graphclient serve-dev
And open http://localhost:4000/ to use GraphiQL. You can now experiment with your Graph client-side GraphQL schema locally! 🥳
-#### Examples
+#### Examples
You can also refer to [examples directory in this repo](../examples), for more advanced examples and integration examples:
@@ -308,8 +308,8 @@ sources:
`highestValue`
-
- This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated.
+
+This strategy allows you to send parallel requests to different endpoints for the same source and choose the most updated.
This is useful if you want to choose most synced data for the same Subgraph over different indexers/sources.
diff --git a/website/src/pages/cs/subgraphs/querying/graph-client/live.md b/website/src/pages/cs/subgraphs/querying/graph-client/live.md
index e6f726cb4352..0e3b535bd5d6 100644
--- a/website/src/pages/cs/subgraphs/querying/graph-client/live.md
+++ b/website/src/pages/cs/subgraphs/querying/graph-client/live.md
@@ -2,7 +2,7 @@
Graph-Client implements a custom `@live` directive that can make every GraphQL query work with real-time data.
-## Getting Started
+## Getting Started
Start by adding the following configuration to your `.graphclientrc.yml` file:
diff --git a/website/src/pages/cs/subgraphs/querying/graphql-api.mdx b/website/src/pages/cs/subgraphs/querying/graphql-api.mdx
index f0cc9b78b338..1a5e672ccbd5 100644
--- a/website/src/pages/cs/subgraphs/querying/graphql-api.mdx
+++ b/website/src/pages/cs/subgraphs/querying/graphql-api.mdx
@@ -6,13 +6,13 @@ Learn about the GraphQL Query API used in The Graph.
## What is GraphQL?
-[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query subgraphs.
+[GraphQL](https://graphql.org/learn/) is a query language for APIs and a runtime for executing those queries with your existing data. The Graph uses GraphQL to query Subgraphs.
-To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a subgraph](/developing/creating-a-subgraph/).
+To understand the larger role that GraphQL plays, review [developing](/subgraphs/developing/introduction/) and [creating a Subgraph](/developing/creating-a-subgraph/).
## Queries with GraphQL
-In your subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type.
+In your Subgraph schema you define types called `Entities`. For each `Entity` type, `entity` and `entities` fields will be generated on the top-level `Query` type.
> Note: `query` does not need to be included at the top of the `graphql` query when using The Graph.
@@ -170,7 +170,7 @@ You can use suffixes like `_gt`, `_lte` for value comparison:
You can also filter entities that were updated in or after a specified block with `_change_block(number_gte: Int)`.
-To může být užitečné, pokud chcete načíst pouze entity, které se změnily například od posledního dotazování. Nebo může být užitečná pro zkoumání nebo ladění změn entit v podgrafu (v kombinaci s blokovým filtrem můžete izolovat pouze entity, které se změnily v určitém bloku).
+This can be useful if you are looking to fetch only entities which have changed, for example since the last time you polled. Or alternatively it can be useful to investigate or debug how entities are changing in your Subgraph (if combined with a block filter, you can isolate only entities that changed in a specific block).
```graphql
{
@@ -329,18 +329,18 @@ This query will return `Challenge` entities, and their associated `Application`
### Fulltext Search Queries
-Fulltext search query fields provide an expressive text search API that can be added to the subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your subgraph.
+Fulltext search query fields provide an expressive text search API that can be added to the Subgraph schema and customized. Refer to [Defining Fulltext Search Fields](/developing/creating-a-subgraph/#defining-fulltext-search-fields) to add fulltext search to your Subgraph.
Fulltext search queries have one required field, `text`, for supplying search terms. Several special fulltext operators are available to be used in this `text` search field.
Fulltext search operators:
-| Symbol | Operátor | Popis |
-| --- | --- | --- |
-| `&` | `And` | Pro kombinaci více vyhledávacích výrazů do filtru pro entity, které obsahují všechny zadané výrazy |
-| | | `Or` | Dotazy s více hledanými výrazy oddělenými operátorem nebo vrátí všechny entity, které odpovídají některému z uvedených výrazů |
-| `<->` | `Follow by` | Zadejte vzdálenost mezi dvěma slovy. |
-| `:*` | `Prefix` | Pomocí předponového výrazu vyhledejte slova, jejichž předpona se shoduje (vyžadovány 2 znaky) |
+| Symbol | Operator | Description |
+| ------ | ----------- | ----------------------------------------------------------------------------------------------------------------------------- |
+| `&` | `And` | For combining multiple search terms into a filter for entities that include all of the provided terms |
+| `&#x7c;` | `Or` | Queries with multiple search terms separated by the or operator will return all entities with a match from any of the provided terms |
+| `<->` | `Follow by` | Specify the distance between two words. |
+| `:*` | `Prefix` | Use the prefix search term to find words whose prefix match (2 characters required) |
#### Examples
@@ -391,7 +391,7 @@ Graph Node implements [specification-based](https://spec.graphql.org/October2021
The schema of your dataSources, i.e. the entity types, values, and relationships that are available to query, are defined through the [GraphQL Interface Definition Language (IDL)](https://facebook.github.io/graphql/draft/#sec-Type-System).
-GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your subgraph is automatically generated from the GraphQL schema that's included in your [subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
+GraphQL schemas generally define root types for `queries`, `subscriptions` and `mutations`. The Graph only supports `queries`. The root `Query` type for your Subgraph is automatically generated from the GraphQL schema that's included in your [Subgraph manifest](/developing/creating-a-subgraph/#components-of-a-subgraph).
> Note: Our API does not expose mutations because developers are expected to issue transactions directly against the underlying blockchain from their applications.
@@ -403,7 +403,7 @@ All GraphQL types with `@entity` directives in your schema will be treated as en
### Subgraph Metadata
-All subgraphs have an auto-generated `_Meta_` object, which provides access to subgraph metadata. This can be queried as follows:
+All Subgraphs have an auto-generated `_Meta_` object, which provides access to Subgraph metadata. This can be queried as follows:
```graphQL
{
@@ -419,7 +419,7 @@ All subgraphs have an auto-generated `_Meta_` object, which provides access to s
}
```
-Pokud je uveden blok, metadata se vztahují k tomuto bloku, pokud ne, použije se poslední indexovaný blok. Pokud je blok uveden, musí se nacházet za počátečním blokem podgrafu a musí být menší nebo roven poslednímu Indevovaný bloku.
+If a block is provided, the metadata is as of that block, if not the latest indexed block is used. If provided, the block must be after the Subgraph's start block, and less than or equal to the most recently indexed block.
`deployment` is a unique ID, corresponding to the IPFS CID of the `subgraph.yaml` file.
@@ -427,6 +427,6 @@ Pokud je uveden blok, metadata se vztahují k tomuto bloku, pokud ne, použije s
- hash: the hash of the block
- number: the number of the block
-- timestamp: časové razítko bloku, pokud je k dispozici (v současné době je k dispozici pouze pro podgrafy indexující sítě EVM)
+- timestamp: the timestamp of the block, if available (this is currently only available for Subgraphs indexing EVM networks)
-`hasIndexingErrors` is a boolean identifying whether the subgraph encountered indexing errors at some past block
+`hasIndexingErrors` is a boolean identifying whether the Subgraph encountered indexing errors at some past block
diff --git a/website/src/pages/cs/subgraphs/querying/introduction.mdx b/website/src/pages/cs/subgraphs/querying/introduction.mdx
index 19ecde83f4a8..6169df767051 100644
--- a/website/src/pages/cs/subgraphs/querying/introduction.mdx
+++ b/website/src/pages/cs/subgraphs/querying/introduction.mdx
@@ -7,11 +7,11 @@ To start querying right away, visit [The Graph Explorer](https://thegraph.com/ex
## Overview
-When a subgraph is published to The Graph Network, you can visit its subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each subgraph.
+When a Subgraph is published to The Graph Network, you can visit its Subgraph details page on Graph Explorer and use the "Query" tab to explore the deployed GraphQL API for each Subgraph.
## Specifics
-Each subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the subgraph details page and clicking on the "Query" button in the top right corner.
+Each Subgraph published to The Graph Network has a unique query URL in Graph Explorer to make direct queries. You can find it by navigating to the Subgraph details page and clicking on the "Query" button in the top right corner.

@@ -21,7 +21,7 @@ You will notice that this query URL must use a unique API key. You can create an
Subgraph Studio users start on a Free Plan, which allows them to make 100,000 queries per month. Additional queries are available on the Growth Plan, which offers usage based pricing for additional queries, payable by credit card, or GRT on Arbitrum. You can learn more about billing [here](/subgraphs/billing/).
-> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the subgraph's entities.
+> Please see the [Query API](/subgraphs/querying/graphql-api/) for a complete reference on how to query the Subgraph's entities.
>
> Note: If you encounter 405 errors with a GET request to the Graph Explorer URL, please switch to a POST request instead.
diff --git a/website/src/pages/cs/subgraphs/querying/managing-api-keys.mdx b/website/src/pages/cs/subgraphs/querying/managing-api-keys.mdx
index 0f5721e5cbcb..f2954c5593c0 100644
--- a/website/src/pages/cs/subgraphs/querying/managing-api-keys.mdx
+++ b/website/src/pages/cs/subgraphs/querying/managing-api-keys.mdx
@@ -1,14 +1,14 @@
---
-title: Správa klíčů API
+title: Managing API keys
---
## Overview
-API keys are needed to query subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application.
+API keys are needed to query Subgraphs. They ensure that the connections between application services are valid and authorized, including authenticating the end user and the device using the application.
### Create and Manage API Keys
-Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific subgraphs.
+Go to [Subgraph Studio](https://thegraph.com/studio/) and click the **API Keys** tab to create and manage your API keys for specific Subgraphs.
The "API keys" table lists existing API keys and allows you to manage or delete them. For each key, you can see its status, the cost for the current period, the spending limit for the current period, and the total number of queries.
@@ -31,4 +31,4 @@ You can click on an individual API key to view the Details page:
- Amount of GRT spent
2. Under the **Security** section, you can opt into security settings depending on the level of control you’d like to have. Specifically, you can:
  - View and manage the domain names authorized to use your API key
- - Přiřazení podgrafů, na které se lze dotazovat pomocí klíče API
+ - Assign Subgraphs that can be queried with your API key
diff --git a/website/src/pages/cs/subgraphs/querying/python.mdx b/website/src/pages/cs/subgraphs/querying/python.mdx
index 669e95c19183..51e3b966a2b5 100644
--- a/website/src/pages/cs/subgraphs/querying/python.mdx
+++ b/website/src/pages/cs/subgraphs/querying/python.mdx
@@ -3,7 +3,7 @@ title: Query The Graph with Python and Subgrounds
sidebarTitle: Python (Subgrounds)
---
-Subgrounds je intuitivní knihovna Pythonu pro dotazování na podgrafy, vytvořená [Playgrounds](https://playgrounds.network/). Umožňuje přímo připojit data subgrafů k datovému prostředí Pythonu, což vám umožní používat knihovny jako [pandas](https://pandas.pydata.org/) k provádění analýzy dat!
+Subgrounds is an intuitive Python library for querying Subgraphs, built by [Playgrounds](https://playgrounds.network/). It allows you to directly connect Subgraph data to a Python data environment, letting you use libraries like [pandas](https://pandas.pydata.org/) to perform data analysis!
Subgrounds offers a simple Pythonic API to build GraphQL queries, automates tedious workflows such as pagination, and empowers advanced users through controlled schema transformations.
@@ -17,24 +17,24 @@ pip install --upgrade subgrounds
python -m pip install --upgrade subgrounds
```
-Po instalaci můžete vyzkoušet podklady pomocí následujícího dotazu. Následující příklad uchopí podgraf pro protokol Aave v2 a dotazuje se na 5 největších trhů seřazených podle TVL (Total Value Locked), vybere jejich název a jejich TVL (v USD) a vrátí data jako pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame).
+Once installed, you can test out subgrounds with the following query. The following example grabs a Subgraph for the Aave v2 protocol and queries the top 5 markets ordered by TVL (Total Value Locked), selects their name and their TVL (in USD) and returns the data as a pandas [DataFrame](https://pandas.pydata.org/pandas-docs/dev/reference/api/pandas.DataFrame.html#pandas.DataFrame).
```python
from subgrounds import Subgrounds
sg = Subgrounds()
-# Načtení podgrafu
+# Load the Subgraph
aave_v2 = sg.load_subgraph(
"https://api.thegraph.com/subgraphs/name/messari/aave-v2-ethereum")
-# Sestavte dotaz
+# Construct the query
latest_markets = aave_v2.Query.markets(
orderBy=aave_v2.Market.totalValueLockedUSD,
- orderDirection="desc",
+ orderDirection='desc',
first=5,
)
-# Vrátit dotaz do datového rámce
+# Return query to a dataframe
sg.query_df([
latest_markets.name,
latest_markets.totalValueLockedUSD,
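Since `sg.query_df` returns an ordinary pandas DataFrame, the result can be analyzed with standard pandas operations. The sketch below uses illustrative sample data shaped like the query result above (the flattened column names and values are assumptions; real output comes from `sg.query_df` and will differ):

```python
import pandas as pd

# Illustrative sample shaped like the query result above; real values
# come from sg.query_df and will differ.
df = pd.DataFrame({
    "markets_name": ["Aave interest bearing USDC", "Aave interest bearing WETH"],
    "markets_totalValueLockedUSD": [1_200_000_000.0, 950_000_000.0],
})

# Typical follow-up analysis: convert TVL to billions and sort descending.
df["tvl_billions"] = df["markets_totalValueLockedUSD"] / 1e9
df = df.sort_values("tvl_billions", ascending=False).reset_index(drop=True)
```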
diff --git a/website/src/pages/cs/subgraphs/querying/subgraph-id-vs-deployment-id.mdx b/website/src/pages/cs/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
index 7bef9e129e33..7792cb56d855 100644
--- a/website/src/pages/cs/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
+++ b/website/src/pages/cs/subgraphs/querying/subgraph-id-vs-deployment-id.mdx
@@ -2,17 +2,17 @@
title: ID podgrafu vs. ID nasazení
---
-Podgraf je identifikován ID podgrafu a každá verze podgrafu je identifikována ID nasazení.
+A Subgraph is identified by a Subgraph ID, and each version of the Subgraph is identified by a Deployment ID.
-When querying a subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a subgraph.
+When querying a Subgraph, either ID can be used, though it is generally suggested that the Deployment ID is used due to its ability to specify a specific version of a Subgraph.
Here are some key differences between the two IDs: 
## ID nasazení
-The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
+The Deployment ID is the IPFS hash of the compiled manifest file, which refers to other files on IPFS instead of relative URLs on the computer. For example, the compiled manifest can be accessed via: `https://api.thegraph.com/ipfs/api/v0/cat?arg=QmQKXcNQQRdUvNRMGJiE2idoTu9fo5F5MRtKztH4WyKxED`. To change the Deployment ID, one can simply update the manifest file, such as modifying the description field as described in the [Subgraph manifest documentation](https://github.com/graphprotocol/graph-node/blob/master/docs/subgraph-manifest.md#13-top-level-api).
-When queries are made using a subgraph's Deployment ID, we are specifying a version of that subgraph to query. Using the Deployment ID to query a specific subgraph version results in a more sophisticated and robust setup as there is full control over the subgraph version being queried. However, this results in the need of updating the query code manually every time a new version of the subgraph is published.
+When queries are made using a Subgraph's Deployment ID, we are specifying a version of that Subgraph to query. Using the Deployment ID to query a specific Subgraph version results in a more sophisticated and robust setup, as there is full control over the Subgraph version being queried. However, it means the query code must be updated manually every time a new version of the Subgraph is published.
Příklad koncového bodu, který používá ID nasazení:
@@ -20,8 +20,8 @@ Příklad koncového bodu, který používá ID nasazení:
## ID podgrafu
-The Subgraph ID is a unique identifier for a subgraph. It remains constant across all versions of a subgraph. It is recommended to use the Subgraph ID to query the latest version of a subgraph, although there are some caveats.
+The Subgraph ID is a unique identifier for a Subgraph. It remains constant across all versions of a Subgraph. It is recommended to use the Subgraph ID to query the latest version of a Subgraph, although there are some caveats.
-Be aware that querying using Subgraph ID may result in queries being responded to by an older version of the subgraph due to the new version needing time to sync. Also, new versions could introduce breaking schema changes.
+Be aware that queries made using the Subgraph ID may be answered by an older version of the Subgraph, since a new version needs time to sync. Also, new versions could introduce breaking schema changes.
Example endpoint that uses Subgraph ID: `https://gateway-arbitrum.network.thegraph.com/api/[api-key]/subgraphs/id/FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW`
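The two endpoint forms differ only in the path segment and the ID supplied. A small helper can construct either; this is a sketch based on the gateway URL shown above, and the `deployments/id` path for Deployment IDs is an assumption from the gateway's URL conventions (verify against your Subgraph's page in Subgraph Studio):

```python
# Base gateway URL taken from the example endpoint on this page.
GATEWAY = "https://gateway-arbitrum.network.thegraph.com/api"

def endpoint_for(api_key: str, id_value: str, by_deployment: bool = False) -> str:
    """Build a query endpoint from either a Subgraph ID or a Deployment ID.

    The "deployments/id" segment is an assumed convention; confirm the exact
    URL in Subgraph Studio before use.
    """
    kind = "deployments/id" if by_deployment else "subgraphs/id"
    return f"{GATEWAY}/{api_key}/{kind}/{id_value}"
```

For example, `endpoint_for("my-key", "FL3ePDCBbShPvfRJTaSCNnehiqxsPHzpLud6CpbHoeKW")` reproduces the Subgraph-ID endpoint shown above.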
diff --git a/website/src/pages/cs/subgraphs/quick-start.mdx b/website/src/pages/cs/subgraphs/quick-start.mdx
index 130f699763ce..7c52d4745a83 100644
--- a/website/src/pages/cs/subgraphs/quick-start.mdx
+++ b/website/src/pages/cs/subgraphs/quick-start.mdx
@@ -2,7 +2,7 @@
title: Rychlé spuštění
---
-Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
+Learn how to easily build, publish and query a [Subgraph](/subgraphs/developing/developer-faq/#1-what-is-a-subgraph) on The Graph.
## Prerequisites
@@ -13,13 +13,13 @@ Learn how to easily build, publish and query a [subgraph](/subgraphs/developing/
## How to Build a Subgraph
-### 1. Create a subgraph in Subgraph Studio
+### 1. Create a Subgraph in Subgraph Studio
Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your wallet.
-Subgraph Studio lets you create, manage, deploy, and publish subgraphs, as well as create and manage API keys.
+Subgraph Studio lets you create, manage, deploy, and publish Subgraphs, as well as create and manage API keys.
-Click "Create a Subgraph". It is recommended to name the subgraph in Title Case: "Subgraph Name Chain Name".
+Click "Create a Subgraph". It is recommended to name the Subgraph in Title Case: "Subgraph Name Chain Name".
### 2. Nainstalujte Graph CLI
@@ -37,13 +37,13 @@ Použitím [yarn](https://yarnpkg.com/):
yarn global add @graphprotocol/graph-cli
```
-### 3. Initialize your subgraph
+### 3. Initialize your Subgraph
-> Příkazy pro konkrétní podgraf najdete na stránce podgrafu v [Subgraph Studio](https://thegraph.com/studio/).
+> You can find commands for your specific Subgraph on the Subgraph page in [Subgraph Studio](https://thegraph.com/studio/).
-The `graph init` command will automatically create a scaffold of a subgraph based on your contract's events.
+The `graph init` command will automatically create a scaffold of a Subgraph based on your contract's events.
-The following command initializes your subgraph from an existing contract:
+The following command initializes your Subgraph from an existing contract:
```sh
graph init
@@ -51,42 +51,42 @@ graph init
If your contract is verified on the respective blockscanner where it is deployed (such as [Etherscan](https://etherscan.io/)), then the ABI will automatically be created in the CLI.
-When you initialize your subgraph, the CLI will ask you for the following information:
+When you initialize your Subgraph, the CLI will ask you for the following information:
-- **Protocol**: Choose the protocol your subgraph will be indexing data from.
-- **Subgraph slug**: Create a name for your subgraph. Your subgraph slug is an identifier for your subgraph.
-- **Directory**: Choose a directory to create your subgraph in.
-- **Ethereum network** (optional): You may need to specify which EVM-compatible network your subgraph will be indexing data from.
+- **Protocol**: Choose the protocol your Subgraph will be indexing data from.
+- **Subgraph slug**: Create a name for your Subgraph. Your Subgraph slug is an identifier for your Subgraph.
+- **Directory**: Choose a directory to create your Subgraph in.
+- **Ethereum network** (optional): You may need to specify which EVM-compatible network your Subgraph will be indexing data from.
- **Contract address**: Locate the smart contract address you’d like to query data from.
- **ABI**: If the ABI is not auto-populated, you will need to input it manually as a JSON file.
-- **Start Block**: You should input the start block to optimize subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed.
+- **Start Block**: You should input the start block to optimize Subgraph indexing of blockchain data. Locate the start block by finding the block where your contract was deployed.
- **Contract Name**: Input the name of your contract.
-- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your subgraph for every emitted event.
+- **Index contract events as entities**: It is suggested that you set this to true, as it will automatically add mappings to your Subgraph for every emitted event.
- **Add another contract** (optional): You can add another contract.
-Na následujícím snímku najdete příklad toho, co můžete očekávat při inicializaci podgrafu:
+See the following screenshot for an example of what to expect when initializing your Subgraph:

-### 4. Edit your subgraph
+### 4. Edit your Subgraph
-The `init` command in the previous step creates a scaffold subgraph that you can use as a starting point to build your subgraph.
+The `init` command in the previous step creates a scaffold Subgraph that you can use as a starting point to build your Subgraph.
-When making changes to the subgraph, you will mainly work with three files:
+When making changes to the Subgraph, you will mainly work with three files:
-- Manifest (`subgraph.yaml`) - defines what data sources your subgraph will index.
-- Schema (`schema.graphql`) - defines what data you wish to retrieve from the subgraph.
+- Manifest (`subgraph.yaml`) - defines what data sources your Subgraph will index.
+- Schema (`schema.graphql`) - defines what data you wish to retrieve from the Subgraph.
- AssemblyScript Mappings (`mapping.ts`) - translates data from your data sources to the entities defined in the schema.
-For a detailed breakdown on how to write your subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/).
+For a detailed breakdown on how to write your Subgraph, check out [Creating a Subgraph](/developing/creating-a-subgraph/).
-### 5. Deploy your subgraph
+### 5. Deploy your Subgraph
> Remember, deploying is not the same as publishing.
-When you **deploy** a subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
+When you **deploy** a Subgraph, you push it to [Subgraph Studio](https://thegraph.com/studio/), where you can test, stage and review it. A deployed Subgraph's indexing is performed by the [Upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/), which is a single Indexer owned and operated by Edge & Node, rather than by the many decentralized Indexers in The Graph Network. A **deployed** Subgraph is free to use, rate-limited, not visible to the public, and meant to be used for development, staging, and testing purposes.
-Jakmile je podgraf napsán, spusťte následující příkazy:
+Once your Subgraph is written, run the following commands:
````
```sh
@@ -94,7 +94,7 @@ graph codegen && graph build
```
````
-Authenticate and deploy your subgraph. The deploy key can be found on the subgraph's page in Subgraph Studio.
+Authenticate and deploy your Subgraph. The deploy key can be found on the Subgraph's page in Subgraph Studio.

@@ -109,37 +109,37 @@ graph deploy
The CLI will ask for a version label. It's strongly recommended to use [semantic versioning](https://semver.org/), e.g. `0.0.1`.
-### 6. Review your subgraph
+### 6. Review your Subgraph
-If you’d like to test your subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:
+If you’d like to test your Subgraph before publishing it, you can use [Subgraph Studio](https://thegraph.com/studio/) to do the following:
- Run a sample query.
-- Analyze your subgraph in the dashboard to check information.
-- Check the logs on the dashboard to see if there are any errors with your subgraph. The logs of an operational subgraph will look like this:
+- Analyze your Subgraph in the dashboard to check information.
+- Check the logs on the dashboard to see if there are any errors with your Subgraph. The logs of an operational Subgraph will look like this:

-### 7. Publish your subgraph to The Graph Network
+### 7. Publish your Subgraph to The Graph Network
-When your subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following:
+When your Subgraph is ready for a production environment, you can publish it to the decentralized network. Publishing is an onchain action that does the following:
-- It makes your subgraph available to be to indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
-- It removes rate limits and makes your subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
-- It makes your subgraph available for [Curators](/resources/roles/curating/) to curate it.
+- It makes your Subgraph available to be indexed by the decentralized [Indexers](/indexing/overview/) on The Graph Network.
+- It removes rate limits and makes your Subgraph publicly searchable and queryable in [Graph Explorer](https://thegraph.com/explorer/).
+- It makes your Subgraph available for [Curators](/resources/roles/curating/) to curate it.
-> The greater the amount of GRT you and others curate on your subgraph, the more Indexers will be incentivized to index your subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your subgraph.
+> The greater the amount of GRT you and others curate on your Subgraph, the more Indexers will be incentivized to index your Subgraph, improving the quality of service, reducing latency, and enhancing network redundancy for your Subgraph.
#### Publishing with Subgraph Studio
-To publish your subgraph, click the Publish button in the dashboard.
+To publish your Subgraph, click the Publish button in the dashboard.
-
+
-Select the network to which you would like to publish your subgraph.
+Select the network to which you would like to publish your Subgraph.
#### Publishing from the CLI
-As of version 0.73.0, you can also publish your subgraph with the Graph CLI.
+As of version 0.73.0, you can also publish your Subgraph with the Graph CLI.
Open the `graph-cli`.
@@ -157,32 +157,32 @@ graph publish
```
````
-3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice.
+3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized Subgraph to a network of your choice.

To customize your deployment, see [Publishing a Subgraph](/subgraphs/developing/publishing/publishing-a-subgraph/).
-#### Přidání signálu do podgrafu
+#### Adding signal to your Subgraph
-1. To attract Indexers to query your subgraph, you should add GRT curation signal to it.
+1. To attract Indexers to query your Subgraph, you should add GRT curation signal to it.
- - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your subgraph.
+ - This action improves quality of service, reduces latency, and enhances network redundancy and availability for your Subgraph.
2. If eligible for indexing rewards, Indexers receive GRT rewards based on the signaled amount.
- - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on subgraph feature usage and supported networks.
+ - It’s recommended to curate at least 3,000 GRT to attract 3 Indexers. Check reward eligibility based on Subgraph feature usage and supported networks.
To learn more about curation, read [Curating](/resources/roles/curating/).
-To save on gas costs, you can curate your subgraph in the same transaction you publish it by selecting this option:
+To save on gas costs, you can curate your Subgraph in the same transaction you publish it by selecting this option:

-### 8. Query your subgraph
+### 8. Query your Subgraph
-You now have access to 100,000 free queries per month with your subgraph on The Graph Network!
+You now have access to 100,000 free queries per month with your Subgraph on The Graph Network!
-You can query your subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button.
+You can query your Subgraph by sending GraphQL queries to its Query URL, which you can find by clicking the Query button.
-For more information about querying data from your subgraph, read [Querying The Graph](/subgraphs/querying/introduction/).
+For more information about querying data from your Subgraph, read [Querying The Graph](/subgraphs/querying/introduction/).
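Querying a Subgraph's Query URL amounts to a plain GraphQL POST with a JSON body. A minimal sketch of building that body (the `markets` entity is a hypothetical schema for illustration, and the commented-out send step assumes the third-party `requests` package plus your own Query URL):

```python
import json

def graphql_payload(query, variables=None):
    """Serialize a GraphQL request body for POSTing to a Subgraph's Query URL."""
    body = {"query": query}
    if variables:
        body["variables"] = variables
    return json.dumps(body)

# Example query against a hypothetical schema with a `markets` entity.
QUERY = """
{
  markets(first: 5) {
    id
  }
}
"""

payload = graphql_payload(QUERY)
# To send it (requires the `requests` package and your Query URL):
# requests.post(query_url, data=payload, headers={"Content-Type": "application/json"})
```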
diff --git a/website/src/pages/cs/substreams/developing/dev-container.mdx b/website/src/pages/cs/substreams/developing/dev-container.mdx
index bd4acf16eec7..339ddb159c87 100644
--- a/website/src/pages/cs/substreams/developing/dev-container.mdx
+++ b/website/src/pages/cs/substreams/developing/dev-container.mdx
@@ -9,7 +9,7 @@ Develop your first project with Substreams Dev Container.
It's a tool to help you build your first project. You can either run it remotely through Github codespaces or locally by cloning the [substreams starter repository](https://github.com/streamingfast/substreams-starter?tab=readme-ov-file).
-Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a subgraph or an SQL-based solution for data handling.
+Inside the Dev Container, the `substreams init` command sets up a code-generated Substreams project, allowing you to easily build a Subgraph or an SQL-based solution for data handling.
## Prerequisites
@@ -35,7 +35,7 @@ To share your work with the broader community, publish your `.spkg` to [Substrea
You can configure your project to query data either through a Subgraph or directly from an SQL database:
-- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph).
+- **Subgraph**: Run `substreams codegen subgraph`. This generates a project with a basic `schema.graphql` and `mappings.ts` file. You can customize these to define entities based on the data extracted by Substreams. For more configurations, see [Subgraph sink documentation](https://docs.substreams.dev/how-to-guides/sinks/subgraph).
- **SQL**: Run `substreams codegen sql` for SQL-based queries. For more information on configuring a SQL sink, refer to the [SQL documentation](https://docs.substreams.dev/how-to-guides/sinks/sql-sink).
## Deployment Options
diff --git a/website/src/pages/cs/substreams/developing/sinks.mdx b/website/src/pages/cs/substreams/developing/sinks.mdx
index f87e46464532..821ded42c0d0 100644
--- a/website/src/pages/cs/substreams/developing/sinks.mdx
+++ b/website/src/pages/cs/substreams/developing/sinks.mdx
@@ -1,5 +1,5 @@
---
-title: Official Sinks
+title: Sink your Substreams
---
Choose a sink that meets your project's needs.
@@ -8,7 +8,7 @@ Choose a sink that meets your project's needs.
Once you find a package that fits your needs, you can choose how you want to consume the data.
-Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a subgraph.
+Sinks are integrations that allow you to send the extracted data to different destinations, such as a SQL database, a file or a Subgraph.
## Sinks
@@ -26,26 +26,26 @@ Sinks are integrations that allow you to send the extracted data to different de
### Official
-| Name | Support | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
-| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |
-| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) |
-| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) |
-| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
-| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) |
-| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
-| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |
+| Name | Support | Maintainer | Source Code |
+| ---------- | ------- | ------------- | ----------------------------------------------------------------------------------------- |
+| SQL | O | StreamingFast | [substreams-sink-sql](https://github.com/streamingfast/substreams-sink-sql) |
+| Go SDK | O | StreamingFast | [substreams-sink](https://github.com/streamingfast/substreams-sink) |
+| Rust SDK | O | StreamingFast | [substreams-sink-rust](https://github.com/streamingfast/substreams-sink-rust) |
+| JS SDK | O | StreamingFast | [substreams-js](https://github.com/substreams-js/substreams-js) |
+| KV Store | O | StreamingFast | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
+| Prometheus | O | Pinax | [substreams-sink-prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| Webhook | O | Pinax | [substreams-sink-webhook](https://github.com/pinax-network/substreams-sink-webhook) |
+| CSV | O | Pinax | [substreams-sink-csv](https://github.com/pinax-network/substreams-sink-csv) |
+| PubSub | O | StreamingFast | [substreams-sink-pubsub](https://github.com/streamingfast/substreams-sink-pubsub) |
### Community
-| Name | Support | Maintainer | Source Code |
-| --- | --- | --- | --- |
-| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
-| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
-| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
-| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
+| Name | Support | Maintainer | Source Code |
+| ---------- | ------- | ---------- | ----------------------------------------------------------------------------------------- |
+| MongoDB | C | Community | [substreams-sink-mongodb](https://github.com/streamingfast/substreams-sink-mongodb) |
+| Files | C | Community | [substreams-sink-files](https://github.com/streamingfast/substreams-sink-files) |
+| KV Store | C | Community | [substreams-sink-kv](https://github.com/streamingfast/substreams-sink-kv) |
+| Prometheus | C | Community | [substreams-sink-Prometheus](https://github.com/pinax-network/substreams-sink-prometheus) |
- O = Official Support (by one of the main Substreams providers)
- C = Community Support
diff --git a/website/src/pages/cs/substreams/developing/solana/account-changes.mdx b/website/src/pages/cs/substreams/developing/solana/account-changes.mdx
index 8c309bbcce31..98da6949aef4 100644
--- a/website/src/pages/cs/substreams/developing/solana/account-changes.mdx
+++ b/website/src/pages/cs/substreams/developing/solana/account-changes.mdx
@@ -11,7 +11,7 @@ This guide walks you through the process of setting up your environment, configu
> NOTE: History for the Solana Account Changes dates as of 2025, block 310629601.
-For each Substreams Solana account block, only the latest update per account is recorded, see the [Protobuf Referece](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance ware omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (ex: lamport changes).
+For each Substreams Solana account block, only the latest update per account is recorded; see the [Protobuf Reference](https://buf.build/streamingfast/firehose-solana/file/main:sf/solana/type/v1/account.proto). If an account is deleted, a payload with `deleted == True` is provided. Additionally, events of low importance are omitted, such as those with the special owner “Vote11111111…” account or changes that do not affect the account data (e.g., lamport changes).
> NOTE: To test Substreams latency for Solana accounts, measured as block-head drift, install the [Substreams CLI](https://docs.substreams.dev/reference-material/substreams-cli/installing-the-cli) and running `substreams run solana-common blocks_without_votes -s -1 -o clock`.
diff --git a/website/src/pages/cs/substreams/developing/solana/transactions.mdx b/website/src/pages/cs/substreams/developing/solana/transactions.mdx
index a50984178cd8..a5415dcfd8e4 100644
--- a/website/src/pages/cs/substreams/developing/solana/transactions.mdx
+++ b/website/src/pages/cs/substreams/developing/solana/transactions.mdx
@@ -36,12 +36,12 @@ Within the generated directories, modify your Substreams modules to include addi
## Step 3: Load the Data
-To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered subgraph](/sps/introduction/) or SQL-DB sink.
+To make your Substreams queryable (as opposed to [direct streaming](https://docs.substreams.dev/how-to-guides/sinks/stream)), you can automatically generate a [Substreams-powered Subgraph](/sps/introduction/) or SQL-DB sink.
### Podgrafy
1. Run `substreams codegen subgraph` to initialize the sink, producing the necessary files and function definitions.
-2. Create your [subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`.
+2. Create your [Subgraph mappings](/sps/triggers/) within the `mappings.ts` and associated entities within the `schema.graphql`.
3. Build and deploy locally or to [Subgraph Studio](https://thegraph.com/studio-pricing/) by running `deploy-studio`.
### SQL
diff --git a/website/src/pages/cs/substreams/introduction.mdx b/website/src/pages/cs/substreams/introduction.mdx
index 57d215576f60..d68760ad1432 100644
--- a/website/src/pages/cs/substreams/introduction.mdx
+++ b/website/src/pages/cs/substreams/introduction.mdx
@@ -13,7 +13,7 @@ Substreams is a powerful parallel blockchain indexing technology designed to enh
## Substreams Benefits
-- **Accelerated Indexing**: Boost subgraph indexing time with a parallelized engine for quicker data retrieval and processing.
+- **Accelerated Indexing**: Boost Subgraph indexing time with a parallelized engine for quicker data retrieval and processing.
- **Multi-Chain Support**: Expand indexing capabilities beyond EVM-based chains, supporting ecosystems like Solana, Injective, Starknet, and Vara.
- **Enhanced Data Model**: Access comprehensive data, including the `trace` level data on EVM or account changes on Solana, while efficiently managing forks/disconnections.
- **Multi-Sink Support:** For Subgraph, Postgres database, Clickhouse, and Mongo database.
diff --git a/website/src/pages/cs/substreams/publishing.mdx b/website/src/pages/cs/substreams/publishing.mdx
index 8e71c65c2eed..19415c7860d8 100644
--- a/website/src/pages/cs/substreams/publishing.mdx
+++ b/website/src/pages/cs/substreams/publishing.mdx
@@ -9,7 +9,7 @@ Learn how to publish a Substreams package to the [Substreams Registry](https://s
### What is a package?
-A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional subgraphs.
+A Substreams package is a precompiled binary file that defines the specific data you want to extract from the blockchain, similar to the `mapping.ts` file in traditional Subgraphs.
## Publish a Package
@@ -44,7 +44,7 @@ A Substreams package is a precompiled binary file that defines the specific data

-That's it! You have succesfully published a package in the Substreams registry.
+That's it! You have successfully published a package in the Substreams registry.

diff --git a/website/src/pages/cs/supported-networks.mdx b/website/src/pages/cs/supported-networks.mdx
index 733c5de18c69..863814948ba7 100644
--- a/website/src/pages/cs/supported-networks.mdx
+++ b/website/src/pages/cs/supported-networks.mdx
@@ -1,22 +1,28 @@
---
title: Podporované sítě
hideTableOfContents: true
+hideContentHeader: true
---
-import { getStaticPropsForSupportedNetworks } from '@/buildGetStaticProps'
-import { SupportedNetworksTable } from '@/supportedNetworks'
+import { getSupportedNetworksStaticProps, SupportedNetworksTable } from '@/supportedNetworks'
+import { Heading } from '@/components'
+import { useI18n } from '@/i18n'
-export const getStaticProps = getStaticPropsForSupportedNetworks(__filename)
+export const getStaticProps = getSupportedNetworksStaticProps
+
+
+ {useI18n().t('index.supportedNetworks.title')}
+
- Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints.
- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier.
-- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks.
+- If a Subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks.
- For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
## Running Graph Node locally
If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration.
-Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support.
+Graph Node can also index other protocols via a Firehose integration. Firehose integrations have been created for NEAR and Arweave. Additionally, Graph Node can support Substreams-powered Subgraphs for any network with Substreams support.
diff --git a/website/src/pages/cs/token-api/_meta-titles.json b/website/src/pages/cs/token-api/_meta-titles.json
new file mode 100644
index 000000000000..7ed31e0af95d
--- /dev/null
+++ b/website/src/pages/cs/token-api/_meta-titles.json
@@ -0,0 +1,6 @@
+{
+ "mcp": "MCP",
+ "evm": "EVM Endpoints",
+ "monitoring": "Monitoring Endpoints",
+ "faq": "FAQ"
+}
diff --git a/website/src/pages/cs/token-api/evm/get-balances-evm-by-address.mdx b/website/src/pages/cs/token-api/evm/get-balances-evm-by-address.mdx
new file mode 100644
index 000000000000..3386fd078059
--- /dev/null
+++ b/website/src/pages/cs/token-api/evm/get-balances-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: Token Balances by Wallet Address
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getBalancesEvmByAddress
+---
+
+The EVM Balances endpoint provides a snapshot of an account’s current token holdings. The endpoint returns the current balances of native and ERC-20 tokens held by a specified wallet address on an Ethereum-compatible blockchain.
diff --git a/website/src/pages/cs/token-api/evm/get-holders-evm-by-contract.mdx b/website/src/pages/cs/token-api/evm/get-holders-evm-by-contract.mdx
new file mode 100644
index 000000000000..0bb79e41ed54
--- /dev/null
+++ b/website/src/pages/cs/token-api/evm/get-holders-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token Holders by Contract Address
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getHoldersEvmByContract
+---
+
+The EVM Holders endpoint provides information about the addresses holding a specific token, including each holder’s balance. This is useful for analyzing token distribution for a particular contract.
diff --git a/website/src/pages/cs/token-api/evm/get-ohlc-prices-evm-by-contract.mdx b/website/src/pages/cs/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
new file mode 100644
index 000000000000..d1558ddd6e78
--- /dev/null
+++ b/website/src/pages/cs/token-api/evm/get-ohlc-prices-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token OHLCV prices by Contract Address
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getOhlcPricesEvmByContract
+---
+
+The EVM Prices endpoint provides pricing data in the Open/High/Low/Close/Volume (OHLCV) format.
diff --git a/website/src/pages/cs/token-api/evm/get-tokens-evm-by-contract.mdx b/website/src/pages/cs/token-api/evm/get-tokens-evm-by-contract.mdx
new file mode 100644
index 000000000000..b6fab8011fc2
--- /dev/null
+++ b/website/src/pages/cs/token-api/evm/get-tokens-evm-by-contract.mdx
@@ -0,0 +1,9 @@
+---
+title: Token Holders and Supply by Contract Address
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getTokensEvmByContract
+---
+
+The Tokens endpoint delivers contract metadata for a specific ERC-20 token contract from a supported EVM blockchain. Metadata includes name, symbol, number of holders, circulating supply, decimals, and more.
diff --git a/website/src/pages/cs/token-api/evm/get-transfers-evm-by-address.mdx b/website/src/pages/cs/token-api/evm/get-transfers-evm-by-address.mdx
new file mode 100644
index 000000000000..604c185588ea
--- /dev/null
+++ b/website/src/pages/cs/token-api/evm/get-transfers-evm-by-address.mdx
@@ -0,0 +1,9 @@
+---
+title: Token Transfers by Wallet Address
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getTransfersEvmByAddress
+---
+
+The EVM Transfers endpoint provides access to historical token transfer events for a specified address. This endpoint is ideal for tracking transaction history and analyzing token movements over time.
diff --git a/website/src/pages/cs/token-api/faq.mdx b/website/src/pages/cs/token-api/faq.mdx
new file mode 100644
index 000000000000..83196959be14
--- /dev/null
+++ b/website/src/pages/cs/token-api/faq.mdx
@@ -0,0 +1,109 @@
+---
+title: Token API FAQ
+---
+
+Get fast answers to easily integrate and scale with The Graph's high-performance Token API.
+
+## General
+
+### What blockchains does the Token API support?
+
+Currently, the Token API supports Ethereum, Binance Smart Chain (BSC), Polygon, Optimism, Base, and Arbitrum One.
+
+### Why isn't my API key from The Graph Market working?
+
+Be sure to use the Access Token generated from the API key, not the API key itself. An access token can be generated from the dashboard on The Graph Market using the dropdown menu next to each API key.
+
+### How current is the data provided by the API relative to the blockchain?
+
+The API provides data up to the latest finalized block.
+
+### How do I authenticate requests to the Token API?
+
+Authentication is managed via JWT tokens obtained through The Graph Market. JWT tokens issued by [Pinax](https://pinax.network/en), a core developer of The Graph, are also supported.
+
+### Does the Token API provide a client SDK?
+
+While a client SDK is not currently available, please share feedback on any SDKs or integrations you would like to see on [Discord](https://discord.gg/graphprotocol).
+
+### Are there plans to support additional blockchains in the future?
+
+Yes, more blockchains will be supported in the future. Please share feedback on desired chains on [Discord](https://discord.gg/graphprotocol).
+
+### Are there plans to offer data closer to the chain head?
+
+Yes, improvements to provide data closer to the chain head are planned. Feedback is welcome on [Discord](https://discord.gg/graphprotocol).
+
+### Are there plans to support additional use cases such as NFTs?
+
+The Graph ecosystem is actively determining the [roadmap](https://thegraph.com/blog/token-api-the-graph/) for additional use cases, including NFTs. Please provide feedback on specific features you would like prioritized on [Discord](https://discord.gg/graphprotocol).
+
+## MCP / LLM / AI Topics
+
+### Is there a time limit for LLM queries?
+
+Yes. The maximum time limit for LLM queries is 10 seconds.
+
+### Is there a known list of LLMs that work with the API?
+
+Yes, Cline, Cursor, and Claude Desktop integrate successfully with The Graph's Token API + MCP server.
+
+Beyond that, whether an LLM "works" depends on whether it supports the MCP protocol directly (or has a compatible plugin/adapter).
+
+### Where can I find the MCP client?
+
+You can find the code for the MCP client in [The Graph's repo](https://github.com/graphprotocol/mcp-client).
+
+## Advanced Topics
+
+### I'm getting 403/401 errors. What's wrong?
+
+Check that you included the `Authorization: Bearer ` header with the correct, non-expired token. Common issues include using the API key instead of generating a new JWT, forgetting the "Bearer" prefix, using an incorrect token, or omitting the header entirely. Ensure you copied the JWT exactly as provided by The Graph Market.
+
+### Are there rate limits or usage costs?
+
+During Beta, the Token API is free for authorized developers. While specific rate limits aren't documented, reasonable throttling exists to prevent abuse. High request volumes may trigger HTTP 429 errors. Monitor official announcements for future pricing changes after Beta.
+
+### What networks are supported, and how do I specify them?
+
+You can query available networks with [this link](https://token-api.thegraph.com/#tag/monitoring/GET/networks). A full list of network IDs accepted by The Graph can be found on [The Graph's Networks](https://thegraph.com/docs/en/supported-networks/). The API supports Ethereum mainnet, Binance Smart Chain, Base, Arbitrum One, Optimism, and Polygon/Matic. Use the optional `network_id` parameter (e.g., `mainnet`, `bsc`, `base`, `arbitrum-one`, `optimism`, `matic`) to target a specific chain. Without this parameter, the API defaults to Ethereum mainnet.
+
+### Why do I only see 10 results? How can I get more data?
+
+Endpoints cap output at 10 items by default. Use pagination parameters: `limit` (up to 500) and `page` (1-indexed). For example, set `limit=50` to get 50 results, and increment `page` for subsequent batches (e.g., `page=2` for items 51-100).
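As a sketch of the pagination described above (the endpoint path and the `limit`/`page` parameter names are taken from this FAQ; the helper function is illustrative):

```javascript
// Build a paginated Token API request URL using the `limit` and `page`
// query parameters described in the FAQ above.
function pagedUrl(base, address, { limit = 50, page = 1 } = {}) {
  const url = new URL(`${base}/balances/evm/${address}`)
  url.searchParams.set('limit', String(limit))
  url.searchParams.set('page', String(page))
  return url.toString()
}

// Example: request the second batch of 50 results.
const url = pagedUrl('https://token-api.thegraph.com', '0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208', {
  limit: 50,
  page: 2,
})
// fetch(url, { headers: { Authorization: 'Bearer <JWT>' } }) ...
```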
+
+### How do I fetch older transfer history?
+
+The Transfers endpoint defaults to 30 days of history. To retrieve older events, increase the `age` parameter up to 180 days maximum (e.g., `age=180` for 6 months of transfers). Transfers older than 180 days cannot be fetched in a single call.
+
+### What does an empty `"data": []` array mean?
+
+An empty data array means no records were found matching the query – not an error. This occurs when querying wallets with no tokens/transfers or token contracts with no holders. Verify you've used the correct address and parameters. An invalid address format will trigger a 4xx error.
+
+### Why is the JSON response wrapped in a `"data"` array?
+
+All Token API responses consistently wrap results in a top-level `data` array, even for single items. This uniform design handles the common case where addresses have multiple balances or transfers. When parsing, be sure to index into the `data` array (e.g., `const results = response.data`).
+
+### Why are token amounts returned as strings?
+
+Large numeric values (like token amounts or supplies) are returned as strings to avoid precision loss, as they often exceed JavaScript's safe integer range. Convert these to big number types for arithmetic operations. Fields like `decimals` are provided as normal numbers to help derive human-readable values.
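The string-to-big-number conversion can be sketched with `BigInt` as follows; the field names are illustrative, not the API's exact schema:

```javascript
// Convert a string token amount plus a `decimals` field into a
// human-readable decimal string. BigInt keeps full precision for
// values beyond Number.MAX_SAFE_INTEGER.
function formatAmount(amount, decimals) {
  const raw = BigInt(amount)
  const base = 10n ** BigInt(decimals)
  const whole = raw / base
  const frac = (raw % base).toString().padStart(decimals, '0')
  return `${whole}.${frac}`
}

formatAmount('123456789000000000000', 18) // '123.456789000000000000'
```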
+
+### What format should addresses be in?
+
+The API accepts 40-character hex addresses with or without the `0x` prefix. The endpoint is case-insensitive, so both lower and uppercase hex characters work. Ensure addresses are exactly 40 hex digits (20 bytes) if you remove the prefix. For contract queries, use the token's contract address; for balance/transfer queries, use the wallet address.
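A minimal client-side check of the address rules above (optional `0x` prefix, exactly 40 case-insensitive hex digits) might look like this; the helper name is hypothetical:

```javascript
// Validate an EVM address: strip an optional 0x prefix, then require
// exactly 40 hex digits (20 bytes) in either letter case.
function isValidEvmAddress(addr) {
  const hex = addr.startsWith('0x') ? addr.slice(2) : addr
  return /^[0-9a-fA-F]{40}$/.test(hex)
}
```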
+
+### Do I need special headers besides authentication?
+
+While recommended, `Accept: application/json` isn't strictly required as the API returns JSON by default. The critical header is `Authorization: Bearer `. Ensure you make a GET request to the correct URL without trailing slashes or path typos (e.g., use `/balances/evm/{address}` not `/balance`).
+
+### MCP integration with Claude/Cline/Cursor shows errors like "ENOENT" or "Server disconnected". How do I fix this?
+
+For "ENOENT" errors, ensure Node.js 18+ is installed and the path to `npx`/`bunx` is correct (consider using full paths in config). "Server disconnected" usually indicates authentication or connectivity issues – verify your `ACCESS_TOKEN` is set correctly and your network allows access to `https://token-api.thegraph.com/sse`.
+
+### Is the Token API part of The Graph's GraphQL service?
+
+No, the Token API is a separate RESTful service. Unlike traditional Subgraphs, it provides ready-to-use REST endpoints (HTTP GET) for common token data. You don't need to write GraphQL queries or deploy Subgraphs. Under the hood, it uses The Graph's infrastructure and MCP with AI for enrichment, but you simply interact with REST endpoints.
+
+### Do I need to use MCP or tools like Claude, Cline, or Cursor?
+
+No, these are optional. MCP is an advanced feature allowing AI assistants to interface with the API via streaming. For standard usage, simply call the REST endpoints with any HTTP client using your JWT. Claude Desktop, Cline bot, and Cursor IDE integrations are provided for convenience but aren't required.
diff --git a/website/src/pages/cs/token-api/mcp/claude.mdx b/website/src/pages/cs/token-api/mcp/claude.mdx
new file mode 100644
index 000000000000..aabd9c69d69a
--- /dev/null
+++ b/website/src/pages/cs/token-api/mcp/claude.mdx
@@ -0,0 +1,58 @@
+---
+title: Using Claude Desktop to Access the Token API via MCP
+sidebarTitle: Claude Desktop
+---
+
+## Prerequisites
+
+- [Claude Desktop](https://claude.ai/download) installed.
+- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/).
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path.
+- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version.
+
+
+
+## Configuration
+
+Create or edit your `claude_desktop_config.json` file.
+
+> **Settings** > **Developer** > **Edit Config**
+
+- OSX: `~/Library/Application Support/Claude/claude_desktop_config.json`
+- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+- Linux: `~/.config/Claude/claude_desktop_config.json`
+
+```json label="claude_desktop_config.json"
+{
+ "mcpServers": {
+ "token-api": {
+ "command": "npx",
+ "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
+ "env": {
+ "ACCESS_TOKEN": ""
+ }
+ }
+ }
+}
+```
+
+## Troubleshooting
+
+To enable logs for the MCP, use the `--verbose true` option.
+
+### ENOENT
+
+
+
+Try using the full path of the command instead:
+
+- Run `which npx` or `which bunx` to get the path of the command.
+- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`).
+
+### Server disconnected
+
+
+
+Double-check your API key. If that doesn't help, check in your browser whether `https://token-api.thegraph.com/sse` is reachable.
+
+> You can always have a look at the full logs under `Claude/logs/mcp.log` and `Claude/logs/mcp-server-pinax.log` for more details.
diff --git a/website/src/pages/cs/token-api/mcp/cline.mdx b/website/src/pages/cs/token-api/mcp/cline.mdx
new file mode 100644
index 000000000000..2e8f478f68c1
--- /dev/null
+++ b/website/src/pages/cs/token-api/mcp/cline.mdx
@@ -0,0 +1,52 @@
+---
+title: Using Cline to Access the Token API via MCP
+sidebarTitle: Cline
+---
+
+## Prerequisites
+
+- [Cline](https://cline.bot/) installed.
+- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/).
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path.
+- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version.
+
+
+
+## Configuration
+
+Create or edit your `cline_mcp_settings.json` file.
+
+> **MCP Servers** > **Installed** > **Configure MCP Servers**
+
+```json label="cline_mcp_settings.json"
+{
+ "mcpServers": {
+ "mcp-pinax": {
+ "command": "npx",
+ "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
+ "env": {
+ "ACCESS_TOKEN": ""
+ }
+ }
+ }
+}
+```
+
+## Troubleshooting
+
+To enable logs for the MCP, use the `--verbose true` option.
+
+### ENOENT
+
+
+
+Try using the full path of the command instead:
+
+- Run `which npx` or `which bunx` to get the path of the command.
+- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`).
+
+### Server disconnected
+
+
+
+Double-check your API key. If that doesn't help, check in your browser whether `https://token-api.thegraph.com/sse` is reachable.
diff --git a/website/src/pages/cs/token-api/mcp/cursor.mdx b/website/src/pages/cs/token-api/mcp/cursor.mdx
new file mode 100644
index 000000000000..fac3a1a1af73
--- /dev/null
+++ b/website/src/pages/cs/token-api/mcp/cursor.mdx
@@ -0,0 +1,50 @@
+---
+title: Using Cursor to Access the Token API via MCP
+sidebarTitle: Cursor
+---
+
+## Prerequisites
+
+- [Cursor](https://www.cursor.com/) installed.
+- A [JWT token](/token-api/quick-start) from [The Graph Market](https://thegraph.market/).
+- [`npx`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) or [`bunx`](https://bun.sh/) installed and available in your path.
+- The `@pinax/mcp` package requires Node 18+, as it relies on built-in `fetch()` / `Headers`, which are not available in Node 17 or older. You may need to specify an exact path to an up-to-date Node version, or uninstall previous versions of Node to ensure `@pinax/mcp` uses the correct version.
+
+
+
+## Configuration
+
+Create or edit your `~/.cursor/mcp.json` file.
+
+> **Cursor Settings** > **MCP** > **Add new global MCP Server**
+
+```json label="mcp.json"
+{
+ "mcpServers": {
+ "mcp-pinax": {
+ "command": "npx",
+ "args": ["@pinax/mcp", "--sse-url", "https://token-api.thegraph.com/sse"],
+ "env": {
+ "ACCESS_TOKEN": ""
+ }
+ }
+ }
+}
+```
+
+## Troubleshooting
+
+
+
+To enable logs for the MCP, use the `--verbose true` option.
+
+### ENOENT
+
+Try using the full path of the command instead:
+
+- Run `which npx` or `which bunx` to get the path of the command.
+- Replace `npx` or `bunx` in the configuration file with the full path (e.g. `/home/user/bin/bunx`).
+
+### Server disconnected
+
+Double-check your API key. If that doesn't help, check in your browser whether `https://token-api.thegraph.com/sse` is reachable.
diff --git a/website/src/pages/cs/token-api/monitoring/get-health.mdx b/website/src/pages/cs/token-api/monitoring/get-health.mdx
new file mode 100644
index 000000000000..57a827b3343b
--- /dev/null
+++ b/website/src/pages/cs/token-api/monitoring/get-health.mdx
@@ -0,0 +1,7 @@
+---
+title: Get health status of the API
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getHealth
+---
diff --git a/website/src/pages/cs/token-api/monitoring/get-networks.mdx b/website/src/pages/cs/token-api/monitoring/get-networks.mdx
new file mode 100644
index 000000000000..0ea3c485ddb9
--- /dev/null
+++ b/website/src/pages/cs/token-api/monitoring/get-networks.mdx
@@ -0,0 +1,7 @@
+---
+title: Get supported networks of the API
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getNetworks
+---
diff --git a/website/src/pages/cs/token-api/monitoring/get-version.mdx b/website/src/pages/cs/token-api/monitoring/get-version.mdx
new file mode 100644
index 000000000000..0be6b7e92d04
--- /dev/null
+++ b/website/src/pages/cs/token-api/monitoring/get-version.mdx
@@ -0,0 +1,7 @@
+---
+title: Get the version of the API
+template:
+ type: openApi
+ apiId: tokenApi
+ operationId: getVersion
+---
diff --git a/website/src/pages/cs/token-api/quick-start.mdx b/website/src/pages/cs/token-api/quick-start.mdx
new file mode 100644
index 000000000000..4083154b5a8b
--- /dev/null
+++ b/website/src/pages/cs/token-api/quick-start.mdx
@@ -0,0 +1,79 @@
+---
+title: Token API Quick Start
+sidebarTitle: Quick Start
+---
+
+
+
+> [!CAUTION] This product is currently in Beta and under active development. If you have any feedback, please reach out to us on [Discord](https://discord.gg/graphprotocol).
+
+The Graph's Token API lets you access blockchain token information via a GET request. This guide is designed to help you quickly integrate the Token API into your application.
+
+The Token API provides access to onchain token data, including balances, holders, detailed token metadata, and historical transfers. This API also uses the Model Context Protocol (MCP) to enrich raw blockchain data with contextual insights using AI tools, such as Claude.
+
+## Prerequisites
+
+Before you begin, get a JWT token by signing up on [The Graph Market](https://thegraph.market/). You can generate a JWT token for each of your API keys using the dropdown menu.
+
+## Authentication
+
+All API endpoints are authenticated using a JWT token inserted in the header as `Authorization: Bearer `.
+
+```json
+{
+ "headers": {
+ "Authorization": "Bearer eyJh•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••NnlA"
+ }
+}
+```
+
+## Using JavaScript
+
+Make an API request using **JavaScript** by adding the request parameters, and then fetching from the relevant endpoint. For example:
+
+```js label="index.js"
+const address = '0x2a0c0dbecc7e4d658f48e01e3fa353f44050c208'
+const options = {
+ method: 'GET',
+ headers: {
+ Accept: 'application/json',
+ Authorization: 'Bearer ',
+ },
+}
+
+fetch(`https://token-api.thegraph.com/balances/evm/${address}`, options)
+ .then((response) => response.json())
+ .then((response) => console.log(response))
+ .catch((err) => console.error(err))
+```
+
+Make sure to replace `